sentences
sequence
labels
sequence
[ "Recent work on non-autoregressive neural machine translation (NAT) aims at improving the efficiency by parallel decoding without sac-rificing the quality.", "However, existing NAT methods are either inferior to Transformer or require multiple decoding passes, leading to reduced speedup.", "We propose the Glancing Language Model (GLM) for single-pass parallel generation models.", "With GLM, we develop Glancing Transformer (GLAT) for machine translation.", "With only single-pass parallel decoding, GLAT is able to generate high-quality translation with 8 -15 speedup.", "Note that GLAT does not modify the network architecture, which is a training method to learn word interdependency.", "Experiments on multiple WMT language directions show that GLAT outperforms all previous single pass non-autoregressive methods, and is nearly comparable to Transformer, reducing the gap to 0.25-0.9 BLEU points.", "(a) Sequential LM y 1 y 2 y 4 y 5 y 3 h 1 h 2 h 3 h 4 h 5 NAT Decoding H H \u0000 Glancing Sampling Hamming Distance N ( Y , Y )= 3 y 1 y 2 y 3 y 4 y 5 y 1 y 2 y 4 y 5 y 3 y 1 y 3 y 5 Replace Inputs 0.8 0.5 0.7 0.6 0.9 h 2 h 4 y 1 y 2 y 1 y 2 y 5 Decoder Encoder x 1 x 2 x 3 x 4 y 3 y 4 y 3 y 4 [BOS] y 1 y 2 y 5 Decoder Encoder x 1 x 2 x 3 x 4 y 3 y 4 h 2 h 4 h 1 h 3 h 5 a tt e n t i o n R a nd o m M a s k i n g Decoder Encoder x 1 x 2 x 3 x 4 y 1 y 4 y 5 [MASK][MASK] y 2 y 3 y 1 y 5 y 4 a tt e n t i o n an apple in the car ein Apfel im Auto Decoder Encoder x 1 x 2 x 3 x 4 y 1 h 2 y 5 h 4 y 3 y 2 y 4 G l a n c i n g S a m p li n g y 3 y 1 y 5 ein Apfel im Auto an apple in the car a tt e n t i o n ein Apfel im Auto ein Apfel im Auto apple in apple the an the car an apple in the an in car a tt e n t i o n", "(c) Masked LM (MLM) y 1 y 2 y 4 y 5 y 3 h 1 h 2 h 3 h 4 h 5 NAT Decoding HH \u0000 Glancing Sampling Hamming Distance N ( Y , Y )= 3 y 1 y 2 y 3 y 4 y 5 y 1 y 2 y 4 y 5 y 3 y 1 y 3 y 5 Replace Inputs 0.8 0.5 0.7 0.6 0.9 h 2 h 4 y 1 y 2 y 1 y 2 y 5 Decoder Encoder x 1 x 2 x 3 x 4 y 3 y 4 y 3 y 4 [BOS] y 1 y 2 y 5 Decoder Encoder x 1 x 2 x 3 x 4 y 3 y 4 h 2 h 4 h 1 h 3 h 5 a tt e n t i o n R a nd o m M a s k i n g Decoder Encoder x 1 x 2 x 3 x 4 y 1 y 4 y 5 [MASK][MASK] y 2 y 3 y 1 y 5 y 4 a tt e n t i o n an apple in the car ein Apfel im Auto Decoder Encoder x 1 x 2 x 3 x 4 y 1 h 2 y 5 h 4 y 3 y 2 y 4 G l a n c i n g S a m p li n g y 3 y 1 y 5 ein Apfel im Auto an apple in the car a tt e n t i o n ein Apfel im Auto ein Apfel im Auto apple in apple the an the car an apple in the an in car a tt e n t i o n", "Transformer has been the most widely used architecture for machine translation (Vaswani et al., 2017).", "Despite its strong performance, the decoding of Transformer is inefficient as it adopts the sequential auto-regressive factorization for its probability model (Figure 1a).", "Recent work such as the non-autoregressive transformer (NAT), aims to decode target tokens in parallel to speed up the generation (Gu et al., 2018).", "However, the vanilla NAT still lags behind the Transformer in translation quality with a gap of about 7.0 BLEU points.", "NAT assumes the conditional independence of the target tokens given the source sentence.", "We suspect that NAT's conditional independence assumption prevents learning word interdependency in the target The work was done when the first author was an intern at Bytedance.", "y 1 y 2 y 4 y 5 y 3 h 1 h 2 h 3 h 4 h 5 NAT Decoding H H \u0000 Glancing Sampling Hamming Distance N ( Y , Y )= 3 y 1 y 2 y 3 y 4 y 5 y 1 y 2 y 4 y 5 y 3 y 1 y 3 y 5 Replace Inputs 
0.8 0.5 0.7 0.6 0.9 h 2 h 4 y 1 y 2 y 1 y 2 y 5 Decoder Encoder x 1 x 2 x 3 x 4 y 3 y 4 y 3 y 4 [BOS] y 1 y 2 y 5 Decoder Encoder x 1 x 2 x 3 x 4 y 3 y 4 h 2 h 4 h 1 h 3 h 5 a tt e n t i o n R a nd o m M a s k i n g Decoder Encoder x 1 x 2 x 3 x 4 y 1 y 4 y 5 [MASK][MASK] y 2 y 3 y 1 y 5 y 4 a tt e n t i o n an apple in the car ein Apfel im Auto Decoder Encoder x 1 x 2 x 3 x 4 y 1 h 2 y 5 h 4 y 3 y 2 y 4 G l a n c i n g S a m p li n g y 3 y 1 y 5 ein Apfel im Auto an apple in the car a tt e n t i o n ein Apfel im Auto ein Apfel im Auto apple in apple the an the car an apple in the an in car a tt e n t i o n", "Independent LM y 1 y 2 y 4 y 5 y 3 h 1 h 2 h 3 h 4 h 5 NAT Decoding H H \u0000 Glancing Sampling Hamming Distance N ( Y , Y )= 3 y 1 y 2 y 3 y 4 y 5 y 1 y 2 y 4 y 5 y 3 y 1 y 3 y 5 Replace Inputs 0.8 0.5 0.7 0.6 0.9 h 2 h 4 y 1 y 2 y 1 y 2 y 5 Decoder Encoder x 1 x 2 x 3 x 4 y 3 y 4 y 3 y 4 [BOS] y 1 y 2 y 5 Decoder Encoder x 1 x 2 x 3 x 4 y 3 y 4 h 2 h 4 h 1 h 3 h 5 a tt e n t i o n R a nd o m M a s k i n g Decoder Encoder x 1 x 2 x 3 x 4 y 1 y 4 y 5 [MASK][MASK] y 2 y 3 y 1 y 5 y 4 a tt e n t i o n an apple in the car ein Apfel im Auto Decoder Encoder x 1 x 2 x 3 x 4 y 1 h 2 y 5 h 4 y 3 y 2 y 4 G l a n c i n g S a m p li n g y 3 y 1 y 5 ein Apfel im Auto an apple in the car a tt e n t i o n ein Apfel im Auto ein Apfel im Auto apple in apple the an the car an apple in the an in car a tt e n t i o n", "(d) Glancing LM (GLM) Figure 1: Probabilistic models for machine translation methods.", "(b) Vanilla NAT uses conditional indepe-dent LM.", "(c) Mask-Predict NAT uses MLM and requires multiple passes of decoding.", "(d) Our proposed GLM leverages the decoder prediction to decide glancing sampling policy during training and only requires one pass of decoding during inference.", "sentence.", "Notice that such word interdependency is crucial, as the Transformer explicitly captures that via decoding from left to right (Figure 1a).", "Several remedies are proposed (Ghazvininejad et al., 2019; Gu et al., 2019) to capture word interdependency while keeping parallel decoding.", "Their common idea is to decode the target tokens iteratively while each pass of decoding is trained using the masked language model (Figure 1c).", "Since these methods require multiple passes of decoding, its generation speed is measurably slower than the vanilla NAT.", "With single-pass generation only, these methods still largely lag behind the autoregressive Transformer.", "One open question is whether a complete parallel decoding model can achieve comparable machine translation performance to the Transformer.", "It should be non-autoregressive and take only one pass of decoding during the inference time.", "To address the quest, we propose glancing language model (GLM), a new method to train a probabilistic sequence model.", "Based on GLM, we develop the glancing Transformer (GLAT) for neural machine translation.", "It achieves parallel text generation with only single decoding.", "Yet, it outperforms previous NAT methods and achieves comparable performance as the strong Transformer baseline in multiple cases.", "Intuitively, GLM adopts a adaptive glancing sampling strategy, which glances at some fragments of the reference if the reference is too difficult to fit in the training of GLAT.", "Correspondingly, when the model is well tuned, it will adaptively reduce the percentage of glancing sampling, making sure that the resulting model could learn to generate the whole sentence in the single-pass fashion.", "The 
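Before the formal description of GLM that follows, a minimal PyTorch-style sketch of one glancing training step may help make the idea concrete. This is an illustrative sketch under assumptions, not the authors' released implementation: the `encoder`, `decoder`, and `embedding` callables, the `uniform_copy` helper, and the fixed glancing ratio are stand-ins for components that the text only describes at a high level.

```python
import torch
import torch.nn.functional as F

def uniform_copy(enc_out, tgt_len):
    # Map target positions uniformly onto source positions; a simple stand-in for
    # the uniform/soft copy used to build the NAT decoder inputs H.
    batch, src_len, _ = enc_out.shape
    idx = torch.linspace(0, src_len - 1, tgt_len, device=enc_out.device).round().long()
    return enc_out[:, idx, :]

def glancing_training_step(encoder, decoder, embedding, src, tgt, pad_id, ratio=0.5):
    """One GLM update: glance at part of the reference, learn to predict the rest."""
    enc_out = encoder(src)                                  # [batch, src_len, d]
    dec_in = uniform_copy(enc_out, tgt.size(1))             # H: [batch, tgt_len, d]

    # First decoding pass: plain NAT prediction, no gradients.
    with torch.no_grad():
        pred = decoder(dec_in, enc_out).argmax(-1)          # initial prediction Y_hat

    # Hamming distance between prediction and reference decides how much to glance.
    tgt_mask = tgt.ne(pad_id)
    distance = (pred.ne(tgt) & tgt_mask).sum(-1)            # d(Y, Y_hat) per sentence
    num_glance = (ratio * distance.float()).long()          # S = ratio * d(Y, Y_hat)

    # Randomly reveal `num_glance` reference tokens per sentence: their embeddings
    # replace the copied encoder states at those positions.
    scores = torch.rand_like(tgt, dtype=torch.float).masked_fill(~tgt_mask, -1.0)
    ranks = scores.argsort(dim=-1, descending=True).argsort(dim=-1)
    glanced = ranks < num_glance.unsqueeze(-1)              # True where the reference is revealed
    dec_in2 = torch.where(glanced.unsqueeze(-1), embedding(tgt), dec_in)

    # Second decoding pass: only the non-glanced target positions contribute to the loss.
    logits = decoder(dec_in2, enc_out)
    keep = tgt_mask & ~glanced
    return F.cross_entropy(logits[keep], tgt[keep])
```

Only the second pass produces gradients, mirroring the description that the first decoding merely measures how well the current model fits the reference; an adaptive schedule for `ratio`, discussed later in the paper, can be substituted for the constant used here.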
"The gradual learning process smooths the learning curve of single-pass parallel generation.", "Specifically, our proposed GLM differs from MLM in two aspects.", "Firstly, GLM proposes an adaptive glancing sampling strategy, which enables GLAT to generate sentences in a one-iteration way, working by gradual training instead of iterative inference (see Figure 1d).", "Generally, GLM is similar in spirit to curriculum learning (Bengio et al., 2009): it first learns to generate some fragments and gradually moves to learning whole sentences (from easy to hard).", "To achieve adaptive glancing sampling, GLM performs decoding twice during training.", "The first decoding is the same as in the vanilla NAT, and its prediction accuracy indicates whether the current reference is difficult to fit.", "In the second decoding, GLM receives words of the reference via glancing sampling according to the first decoding, and learns to predict the remaining words that are not sampled.", "Note that only the second decoding updates the model parameters.", "Secondly, instead of using the [MASK] token, GLM directly uses representations from the encoder at the corresponding positions, which is more natural and could enhance the interactions between sampled words and signals from the encoder.", "Note that GLAT does not modify the network architecture; it is a training method that explicitly learns word interdependency.", "Experimental results show that GLAT obtains significant improvements (about 5 BLEU) on standard benchmarks compared to the vanilla NAT, without losing inference speedup.", "GLAT achieves competitive results against iterative approaches like Mask-Predict (Ghazvininejad et al., 2019), even outperforming the Mask-Predict model on WMT14 DE-EN and WMT16 RO-EN.", "Compared to the strong AT baseline, GLAT can still close the performance gap to within 0.9 BLEU points while keeping a 7.9x speed-up.", "Empirically, we even find that GLAT outperforms AT when the length of the reference is less than 20 on WMT14 DE-EN.", "We speculate this is because GLM can capture bidirectional context for generation while its left-to-right counterpart is only unidirectional, which indicates the potential of parallel generation approaches like GLAT.", "We state and compare different probability models for machine translation.", "A machine translation task can be formally defined as a sequence-to-sequence generation problem: given the source sentence X = {x_1, x_2, ..., x_N}, generate the target sentence Y = {y_1, y_2, ..., y_T} according to the conditional probability P(Y | X; θ), where θ denotes the parameter set of a network.", "Different methods factorize the conditional probability differently.", "The Transformer uses the autoregressive factorization to maximize the following likelihood: L_AT = log P(Y | X; θ) = ∑_{t=1}^{T} log p(y_t | y_{<t}, X; θ), where y_{<t} = {[BOS], y_1, ..., y_{t-1}}.", "For simplicity, we omit the number of samples in the equation.", "Note that the training of AT adopts left-to-right teacher forcing on the target tokens (Vaswani et al., 2017).", "The word interdependency is learned in a unidirectional way.", "During inference, the preceding predicted token is fed into the decoder to generate the next token.", "The vanilla NAT consists of the same encoder as the Transformer and a parallel decoder with layers of multi-head attention (Gu et al., 2018).", "During training, it uses the conditionally independent factorization for the target sentence: L_NAT = ∑_{t=1}^{T} log p(y_t | X; θ).", "Notice that NAT's log-likelihood is an approximation to the full log-likelihood log P(Y | X; θ).", "During inference, the encoder representation is copied as the input to the decoder; therefore, all tokens on the target side can be generated in parallel.", "Such a conditional independence assumption does not hold in general, which explains the inferior performance of NAT.", "Multi-pass iterative decoding approaches such as Mask-Predict (Ghazvininejad et al., 2019) extend the vanilla NAT.", "They still use the conditionally independent factorization, together with a random masking scheme: L_MLM = ∑_{y_t ∈ RM(Y)} log p(y_t | Φ(Y, RM(Y)), X; θ), where RM(Y) is a set of randomly selected words from Y, and Φ(·) replaces these selected words in Y with the [MASK] token.", "For example, in Figure 1c, RM(Y) = {y_2, y_3} and Φ(Y, RM(Y)) = {y_1, [MASK], [MASK], y_4, y_5}.", "The number of masked tokens is distributed uniformly from 1 to the total number of tokens in the target sequence.", "Such a training objective is used to learn a refinement model that can predict the masked tokens given the source sentence X and the words generated in the previous iteration.", "The vanilla NAT breaks word interdependency, while MLM requires multiple passes of decoding to re-establish the word interdependency.", "Our goal in this work is to design a better probability model and a training objective to enable word interdependency learning for single-pass parallel generation.", "In this section, we present GLAT in detail.", "GLAT uses the same encoder-decoder architecture as the vanilla NAT (Gu et al., 2018).", "GLAT differs from the vanilla NAT in that it explicitly encourages word interdependency via training with the glancing language model (GLM).", "It differs from iterative NAT with MLM in that it is trained to produce single-pass parallel decoding, whereas MLM is used for prediction refinement.", "Given the input source sentence X = {x_1, x_2, ..., x_N}, the task is to predict Y = {y_1, y_2, ..., y_T}.", "The glancing Transformer (GLAT) formulates a glancing language model (GLM) during training.", "It maximizes the following: L_GLM = ∑_{y_t ∈ GS̄(Y, Ŷ)} log p(y_t | GS(Y, Ŷ), X; θ), (1) where Ŷ is the initial predicted tokens and GS(Y, Ŷ) is a subset of tokens selected via the glancing sampling strategy (Figure 2, described in detail in the next section).", "The glancing sampling strategy selects words from the target sentence by comparing the initial prediction against the ground-truth tokens.", "It selects more tokens and feeds the embeddings of these tokens into the decoder input if the network's initial prediction is less accurate.", "GS̄(Y, Ŷ) is the remaining subset of tokens within the target Y that are not selected.", "The training loss above is calculated on these remaining tokens.", "GLAT adopts a similar encoder-decoder architecture to the Transformer, with some modifications (Figure 1d).", "Its encoder f_enc consists of the same multi-head attention layers.", "Its decoder f_dec includes multiple layers of multi-head attention, where each layer attends to the full sequence of both the encoder representation and the previous decoder layer's representation.", "During the initial prediction, the inputs to the decoder H = {h_1, h_2, ..., h_T} are copied from the encoder output using either uniform copy or soft copy (Wei et al., 2019).", "The initial tokens Ŷ are predicted using argmax decoding with f_dec(f_enc(X; θ), H; θ).", "To calculate the loss L_GLM, we compare the initial prediction Ŷ against the ground truth to select tokens within the target sentence, i.e., GS(Y, Ŷ).", "We then replace the h's at the sampled indices with the corresponding target word embeddings, H′ = RP(Emb_{y_t ∈ GS(Y, Ŷ)}(y_t), H), where RP replaces the corresponding indices.", "Namely, if a token in the target is sampled, its word embedding replaces the corresponding h.", "Here the word embeddings are obtained from the softmax embedding matrix of the decoder.", "The updated H′ is then fed into the decoder f_dec again to calculate the output token probabilities.", "Specifically, the output probabilities of the remaining tokens, p(y_t | GS(Y, Ŷ), X; θ), are computed with f_dec(H′, f_enc(X; θ); θ).", "One important component of GLM is to adaptively select the positions of tokens from the target sentence.", "The selected tokens provide correct information from the ground-truth target and therefore help train the decoder to predict the remaining, non-selected tokens.", "Intuitively, our adaptive sampling strategy guides the model to first learn the generation of fragments and then gradually move to whole sentences.", "Our glancing sampling strategy selects many words at the start of training, when the model is not yet well tuned.", "As the model progressively gets better, the sampling strategy samples fewer words to enable the model to learn parallel generation of the whole sentence.", "Note that the sampling strategy is crucial in the training of GLAT.", "As illustrated in Figure 2, glancing sampling can be divided into two steps: first deciding a sampling number S, and then randomly selecting S words from the reference.", "The sampling number S will be larger when the model is poorly trained and decreases along the training process.", "Note that we choose to randomly select the S words from the reference.", "The random reference word selection is simple and yields good performance empirically.", "Formally, given the input X, its predicted sentence Ŷ, and its reference Y, the goal of the glancing sampling function GS(Y, Ŷ) is to obtain a subset of words sampled from Y: GS(Y, Ŷ) = Random(Y, S(Y, Ŷ)). (2)", "Here, Random(Y, S) randomly selects S tokens from Y, and S is computed by comparing the difference between Y and Ŷ: S(Y, Ŷ) = λ · d(Y, Ŷ).", "The sampling ratio λ is a hyper-parameter to more flexibly control the number of sampled tokens.", "d(Y, Ŷ) is a metric for measuring the difference between Y and Ŷ.", "We adopt the Hamming distance (Hamming, 1950) as the metric, computed as d(Y, Ŷ) = ∑_{t=1}^{T} 1(y_t ≠ ŷ_t).", "With d(Y, Ŷ), the sampling number can be decided adaptively, taking the current model's prediction capability into account.", "For situations where Y and Ŷ have different lengths, d(Y, Ŷ) can be another distance such as the Levenshtein distance (Levenshtein, 1966).", "Alternative glancing sampling strategies can be adopted as well.", "For example, one simple alternative strategy is to set the number of sampled tokens to be proportional to the target sentence length, i.e. 
S = T .", "We will evaluate the effects of these variations in the experiment.", "GLAT only modifies the training procedure.", "Its inference is fully parallel with only a single pass.", "For parallel generation, we need to decide the output lengths before decoding.", "A simple way to decide the output lengths is predicting length with representations from the encoder.", "In GLAT, the length prediction is implemented as in Ghazvininejad et al. (2019).", "An additional [LENGTH] token is added to the source input, and the encoder output for the [LENGTH] token is used to predict the length.", "We also use two more complex methods to better decide the output lengths: noisy parallel decoding (NPD) and connectionist temporal classifica-tion (CTC).", "For NPD (Gu et al., 2018), we first predict m target length candidates, then generate output sequences with argmax decoding for each target length candidate.", "Then we use a pre-trained Models I dec WMT14 WMT16 Speed Up EN-DE DE-EN EN-RO RO-EN AT Models Transformer (Vaswani et al., 2017) T 27.30 / / / / Transformer (ours) T 27.48 31.27 33.70 34.05 1.0 Iterative NAT NAT-IR (Lee et al., 2018) 10 21.61 25.48 29.32 30.19 1.5 LaNMT (Shu et al., 2020) 4 26.30 / / 29.10 5.7 LevT (Gu et al., 2019) 6+ 27.27 / / 33.26 4.0 Mask-Predict (Ghazvininejad et al., 2019) 10 27.03 30.53 33.08 33.31 1.7 JM-NAT (Guo et al., 2020b) 10 27.31 31.02 / / 5.7 Fully NAT NAT-FT (Gu et al., 2018) 1 17.69 21.47 27.29 29.06 15.6 Mask-Predict (Ghazvininejad et al., 2019) 1 18.05 21.83 27.32 28.20 / imit-NAT (Wei et al., 2019) 1 22.44 25.67 28.61 28.90 18.6 NAT-HINT (Li et al., 2019) 1 21.11 25.24 / / / Flowseq (Ma et al., 2019) 1 23.72 28.39 29.73 30.72 1.1 NAT-DCRF (Sun et al., 2019) 1 23.44 27.22 / / 10.4 w/ CTC NAT-CTC (Libovick`y and Helcl, 2018) 1 16.56 18.64 19.54 24.67 / Imputer (Saharia et al., 2020) 1 25.80 28.40 32.30 31.70 18.6 w/ NPD NAT-FT + NPD (m=100) 1 19.17 23.20 29.79 31.44 2.4 imit-NAT + NPD (m=7) 1 24.15 27.28 31.45 31.81 9.7 NAT-HINT + NPD (m=9) 1 25.20 29.52 / / / Flowseq + NPD (m=30) 1 25.31 30.68 32.20 32.84 / NAT-DCRF + NPD (m=9) 1 26.07 29.68 / / 6.1 Ours NAT-base 1 20.36 24.81 28.47 29.43 15.3 CTC 1 25.52 28.73 32.60 33.46 14.6 GLAT 1 25.21 29.84 31.19 32.04 15.3 GLAT + CTC 1 26.39 29.54 32.79 33.84 14.6 GLAT + NPD (m=7) 1 26.55 31.02 32.87 33.51 7.9 Table 1: Results on WMT14 EN-DE/DE-EN and WMT16 EN-RO/RO-EN benchmarks.", "transformer to rank these sequences and identify the best overall output as the final output.", "For CTC (Graves et al., 2006), following Libovick`y and Helcl (2018), we first set the max output length to twice the source input length, and remove the blanks and repeated tokens after generation.", "In this section, we first introduce the settings of our experiments, then report the main results compared with several strong baselines.", "Ablation studies and further analysis are also included to verify the effects of different components used in GLAT.", "Datasets We conduct experiments on three machine translation benchmarks: WMT14 EN-DE (4.5M translation pairs), WMT16 EN-RO (610k translation pairs), and IWSLT16 DE-EN (150K translation pairs).", "These datasets are tokenized and segmented into subword units using BPE encodings (Sennrich et al., 2016).", "We preprocess WMT14 EN-DE by following the data preprocessing in Vaswani et al. (2017).", "For WMT16 EN-RO and IWSLT16 DE-EN, we use the processed data provided in Lee et al. 
(2018).", "Knowledge Distillation Following previous work (Gu et al., 2018; Lee et al., 2018; Wang et al., 2019), we also use sequence-level knowledge distillation for all datasets.", "We employ the transformer with the base setting in Vaswani et al. (2017) as the teacher for knowledge distillation.", "Then, we train our GLAT on distilled data.", "Baselines and Setup We compare our method with the base Transformer and strong representative NAT baselines in Table 1.", "For all our tasks, we obtain other NAT models' performance by directly using the performance figures reported in their papers if they are available.", "We adopt the vanilla model which copies source input uniformly in Gu et al. (2018) as our base model (NAT-base) and replace the Uni-formCopy with attention mechanism using positions.", "Note that the output length does not equal the length of reference in models using CTC.", "Therefore, for GLAT with CTC, we adopt longest common subsequence distance for compar-25 26 27 28 29 30 31 Performance (BLEU) 2.5 5.0 7.5 10.0 12.5 15.0 17.5 R e l a t i v e D e c o d i n g Sp ee d u p GLAT Transformer NAT-base NAT-DCRF imit-NAT CTC Mask-Predict Figure 3: The trade-off between speed-up and BLEU on WMT14 DE-EN ing Y and Y , and the glancing target is the target alignment that maximize the output probability arg max a B 1 ( Y ) P ( a | X ; ) .", "B 1 is the mapping proposed in (Graves et al., 2006), which expand the reference to the length of output by inserting blanks or repeating words.", "For WMT datasets, we follow the hyperparam-eters of the base Transformer in Vaswani et al. (2017).", "And we choose a smaller setting for IWSLT16, as IWSLT16 is a smaller dataset.", "For IWSLT16, we use 5 layers for encoder and decoder, and set the model size d model to 256.", "Using Nvidia V100 GPUs, We train the model with batches of 64k/8k tokens for WMT/IWSLT datasets, respectively.", "We set the dropout rate to 0.1 and use Adam optimizer (Kingma and Ba, 2014) with = (0 . 9 , 0 . 999) .", "For WMT datasets, the learning rate warms up to 5 e 4 in 4k steps and gradually decays according to inverse square root schedule in Vaswani et al. (2017).", "As for IWSLT16 DE-EN, we adopt linear annealing (from 3 e 4 to 1 e 5 ) as in Lee et al. 
(2018).", "For the hyper-parameter , we adopt linear annealing from 0.5 to 0.3 for WMT datasets and a fixed value of 0.5 for IWSLT16.", "The final model is created by averaging the 5 best checkpoints chosen by validation BLEU scores.", "We report tokenized BLEU for all the datasets used in experiment.", "We measure the average latency per sentence on a single Nvidia 1080TI GPU.", "The main results on the benchmarks are presented in Table 1.", "GLAT significantly improves the translation quality and outperforms strong baselines by a large margin.", "Our method introduces explicit word interdependency modeling for the decoder and gradually learns simultaneous generation of whole sequences, enabling the model to better capture the underlying data structure.", "Compared to models [0,20) [20,40) [40,60) >=60 Source Input Length 26 28 30 32 34 P e r f o r m a n c e ( BLEU ) NAT-base Transformer GLAT Figure 4: Performance under different source input length on WMT14 DE-EN with iterative decoding, our method completely maintains the inference efficiency advantage of fully non-autoregressive models, since GLAT generate with a single pass.", "Compared with the baselines, we highlight our empirical advantages: GLAT is highly effective.", "Compared with the vanilla NAT-base models, GLAT obtains significant improvements (about 5 BLEU) on EN-DE/DE-EN.", "Additionally, GLAT also outperforms other fully non-autoregressive models with a substantial margin (almost +2 BLEU points on average).", "The results are even very close to those of the AT model, which shows great potential.", "GLAT is simple and can be applied to other NAT models flexibly, as we only modify the training process by reference glancing while keeping inference unchanged.", "For comparison, NAT-DCRF utilizes CRF to generate sequentially; NAT-IR and Mask-Predict models need multiple decoding iterations.", "CTC and NPD use different approaches to determine the best output length, and they have their own advantages and disadvantages.", "CTC requires the output length to be longer than the exact target length.", "With longer output lengths, the training will consume more time and GPU memory.", "As for NPD, with a certain number of length reranking candidates, the inference speed will be slower than models using CTC.", "Note that NPD can use pretrained AT models or the non-autoregressive model itself to rerank multiple outputs.", "We also present a scatter plot in Figure 3, displaying the trend of speed-up and BLEU with different NAT models.", "It is shown that the point of 1 2 3 4 5 6 7 8 Decoding Iterations 26 27 28 29 30 31 32 BLEU WMT14 DE-EN WMT14 EN-DE Figure 5: The BLEU scores of GLAT with different decoding iterations Model WMT14 EN-DE DE-EN NAT-base 8.32% 7.10% GLAT 1.19% 1.05% GLAT w/ NPD 0.32% 0.16% Table 2: Token repetition ratio on WMT14 EN-DE and WMT14 DE-ENGLAT is located on the top-right of the competing methods.", "Obviously, GLAT outperforms our competitors in BLEU if speed-up is controlled, and in speed-up if BLEU is controlled.", "This indicates that GLAT outperforms previous NAT methods.", "Although iterative models like Mask-Predict achieves competitive BLEU scores, they only maintain minor speed advantages over AT.", "In contrast, fully non-autoregressive models remarkably improve the inference speed.", "Effect of Source Input Length To analyze the effect of source input length on the models' performance, we split the source sentences into different intervals by length after BPE and compute the BLEU score for each 
interval.", "The histogram of results is presented in Figure 4.", "NAT-base's performance drops sharply for long sentences, while the gradual learning process enables GLAT to boost the performance by a large margin, especially for long sentences.", "We also find that GLAT outperforms autoregressive Transformer when the source input length is smaller than 20.", "GLAT Reduces Repetition We also measure the percentage of repeated tokens on test set of WMT14 EN-DE and WMT14 DE-EN.", "Table 2 presents the token repetition ratio of sentences generated by NAT-base and GLAT.", "The results show that GLAT significantly reduces the occurrence of repetition, and the repetition ratio can be further Sampling Number BLEU Fixed 0.0 24.66 0.1 24.91 0.2 27.12 0.3 24.98 0.4 22.96 Adaptive -29.61 Table 3: Performances on IWSLT16 with fixed sampling ratio.", "reduced with NPD.", "We think an important cause of the improvement is better interdependency modeling.", "Since GLAT explicitly encourages word interdependency modeling to better capture the dependency between target tokens, wrong generation patterns, such as repetition, can be largely avoided.", "GLAT Achieves Strong Results without Multiple Iterations We conduct experiments of GLAT with more than one decoding iteration in inference.", "We adopt the inference algorithm in Mask-Predict for multiple-iteration decoding.", "The results are shown in Figure", "5. We find that GLAT can achieve decent performances with only one decoding iteration, while further iterations only obtain minor improvements of 0.2 0.3 BLEU.", "Effectiveness of the Adaptive Sampling Number To validate the effectiveness of the adaptive sampling strategy for the sampling number S ( Y, Y ) , we also introduce two fixed approaches for comparison.", "The first one decides the sampling number with T , where T is the length of Y , and is a constant ratio.", "The second one is relatively flexible, which sets a start ratio of s and an end ratio e , and linearly reduces the sampling number from s T to e T along the training process.", "As shown in Table 3 and Table 4, our adaptive approach (Adaptive in the table) outperforms the baseline models with big margins.", "The results con-firm our intuition that the sampling schedule affects the generation performance of our NAT model.", "The Selection Strategy GLAT GLAT w/ NPD random 25.21 26.55 p ref 24.87 25.83 1 p ref 25.37 26.52 most certain 24.99 26.22 most uncertain 24.86 26.13 Table 5: Performance on WMT14 EN-DE with different reference word selection strategies.", "sampling strategy, which first offers relatively easy generation problems and then turns harder, bene-fits the final performance.", "Besides, even with the simplest constant ratio, GLAT still achieves remarkable results.", "When set = 0 .", "2 , it even outperforms the baseline = 0 .", "0 by 2.5 BLEU points.", "The experiments potentially support that it is ben-eficial to learn the generation of fragments at the start and gradually transfer to the whole sequence.", "The flexible decreasing ratio method works better than the constant one, and our proposed adaptive approaches achieve the best results.", "Influence of Reference Word Selection To analyze how the strategies of selecting reference words affect glancing sampling, we conduct experiments with different selection strategies.", "By default, we assume all the words in the reference are equally important and randomly choose reference words for glancing.", "Besides the random strategy, we devise four other selection methods 
considering the prediction of first decoding.", "For p ref and 1 p ref , the sampling probability of each reference word is proportional to the output probability for the reference word p ref and the probability 1 p ref , respectively.", "Similar to the word selection strategy for masking words during inference in Mask-Predict, we also add two strategies related to the prediction confidence: \"most certain\" and \"most uncertain.\"", "We choose the positions where predictions have higher confidence for \"most certain\", and vise versa for \"most uncertain.\"", "The results for different selection methods are listed in Table", "5. In comparisons, the model with the selection Method WMT14 EN-DE DE-EN GLAT w/ uniform sampling 19.16 23.56 GLAT w/ [MASK] inputs 24.99 29.48 GLAT 25.21 29.84 Table 7: Ablation study for comparing GLAT and Mask-Predict on WMT14 EN-DE and DE-EN.", "strategy 1 p ref outperforms the one with p ref , indicating that words hard to predict are more important for glancing in training.", "And we find that the random strategy performs a little better than the two confidence-based strategies.", "We think this indicates that introducing more randomness in sampling enable GLAT to explore more interdependency among target words.", "We adopt the random strategy for its simplicity and good performance.", "Comparison of Different Distances for Glancing Sampling We conduct experiments with two distances for comparing the predictions of the first decoding and references, and the results are presented in Table", "6. Experimental results show that both distances can be used to improve the quality of one-iteration generation, and GLAT with Hamming distance is better than GLAT with Levenshtein distance.", "Especially when there is no target length reranking, GLAT with Hamming distance outperforms GLAT with Levenshtein distance by about 0.7 BLEU and 0.9 BLEU on WMT14 EN-DE and DE-EN respectively.", "We think Hamming distance is more strict than Levenshtein distance because only the same words on the corresponding positions are regarded as correct, which is more consistent with the training of GLAT.", "Advantages of GLAT over Mask-Predict To study the effects of sampling strategy and decoder inputs of GLAT, we conduct experiments for replacing these two modules in GLAT with the corresponding part in Mask-Predict, respectively.", "The results are presented in Table", "7. 
GLAT employs glancing sampling strategy instead of the uniform sampling strategy used in Mask-Predict, and replaces the [MASK] token inputs with source representations from the encoder.", "The results show that the glancing sampling strategy outperforms the uniform sampling strategy by 5 6 BLEU points, and feeding representations from the encoder as the decoder input could still improve the strong baseline by 0.2 0.3 BLEU points after adopting glancing sampling.", "To sum up, the adaptive glancing sampling approach contributes the most to the final improvement, and the use of representations from the encoder also helps a bit.", "Fully Non-Autoregressive Models A line of work introduces various forms of latent variables to reduce the model's burden of dealing with dependencies among output words (Gu et al., 2018; Ma et al., 2019; Bao et al., 2019; Ran et al., 2019; Bao et al., 2021).", "Another branch of work considers transferring the knowledge from autoregressive models to non-autoregressive models (Wei et al., 2019; Li et al., 2019; Guo et al., 2020a; Sun and Yang, 2020).", "Besides, there are also some work that apply different training objectives to train non-autoregressive models (Libovick`y and Helcl, 2018; Shao et al., 2020; Ghazvininejad et al., 2020a), add regularization terms (Wang et al., 2019; Guo et al., 2019).", "Non-Autoregressive Models with Structured Decoding To model the dependencies between words, Sun et al. (2019) introduces a CRF inference module in NAT and performs additional sequential decoding after the non-autoregressive computation in inference.", "Deng and Rush (2020) proposes cascaded CRF decoding.", "Since GLAT only performs single-pass non-autoregressive generation, our approach is orthogonal to the method proposed in Sun et al. (2019).", "We can also combine our approach with the structured decoding methods.", "Non-Autoregressive Models with Iterative Refinement A series of work are devoted to semi-autoregressive models that refine the outputs with multi-pass iterative decoding (Lee et al., 2018; Miao et al., 2019; Gu et al., 2019; Ghazvininejad et al., 2019, 2020b; Kasai et al., 2020; Li et al., 2020).", "Lee et al. (2018) proposed a method of iterative refinement based on denoising autoencoder.", "Gu et al. (2019) utilized insertion and deletion to refine the outputs in inference.", "Ghazvininejad et al. 
(2019) trained the model with the masked language model, and the model iteratively replaces masked tokens with new outputs.", "(Li et al., 2020) first predict the left token and right token for each position, and decode the final token at the current position conditioned on the left-and-right tokens predicted before.", "Despite the relatively better accuracy, the multiple decoding iterations reduce the inference efficiency of non-autoregressive models.", "Scheduled Sampling To alleviate exposure bias in autoregressive models, previous work attempts to close the gap between training and inference by scheduled sampling (Bengio et al., 2015; Mihaylova and Martins, 2019).", "Although scheduled sampling also modifies decoder inputs in training, there are mainly two differences between our work and scheduled sampling.", "Firstly, scheduled sampling mixes up the predicted sequence and the gold target sequence, and our method does not mix predicted sequences into decoder inputs.", "Besides, GLAT aims to learn word interdependency for single-pass parallel generation and scheduled sampling is designed for alleviating exposure bias.", "In this paper, we propose Glancing Transformer with a glancing language model to improve the performance of single-pass parallel generation models.", "With the glancing language model, the model starts from learning the generation of sequence fragments and gradually moving to whole sequences.", "Experimental results show that our approach significantly improves the performance of non-autoregressive machine translation with single-pass parallel generation.", "As GLAT achieves competitive performance compared with autoregressive models, applying our approach to other generation tasks is a promising direction for future work.", "We thank all the anonymous reviewers for their valuable comments.", "Hao Zhou and Lei Li are corresponding authors." ]
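Complementing the training sketch given earlier, the following is a correspondingly minimal sketch of GLAT's single-pass parallel inference with length prediction from a [LENGTH] token, as described in the experiments section of the paper above. The module names (`encoder`, `decoder`, `length_head`) and the single shared batch length are illustrative simplifications rather than the authors' code; the NPD and CTC variants are omitted.

```python
import torch

def glat_inference(encoder, decoder, length_head, src, length_token_id):
    """Single-pass parallel decoding: predict a target length, copy encoder states
    as decoder inputs, and emit every target token with one argmax pass."""
    batch = src.size(0)
    # Prepend the [LENGTH] token so its encoder state can be used to predict the length.
    length_col = torch.full((batch, 1), length_token_id, dtype=src.dtype, device=src.device)
    enc_out = encoder(torch.cat([length_col, src], dim=1))      # [batch, 1 + src_len, d]
    src_states = enc_out[:, 1:]                                 # drop the [LENGTH] position

    tgt_len = max(int(length_head(enc_out[:, 0]).argmax(-1).max()), 1)  # one shared length (sketch only)

    # Copy encoder states to initialize the decoder inputs, then decode in parallel.
    src_len = src_states.size(1)
    idx = torch.linspace(0, src_len - 1, tgt_len, device=src.device).round().long()
    dec_in = src_states[:, idx, :]
    return decoder(dec_in, src_states).argmax(-1)               # [batch, tgt_len]
```

In the paper itself, committing to a single predicted length is avoided by reranking several length candidates (NPD) or by using CTC; both keep the decoding itself single-pass.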
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "objective", "abstain", "result", "result", "other", "other" ]
[ "Selecting input features of top relevance has become a popular method for building self-explaining models.", "In this work, we extend this selective rationalization approach to text matching, where the goal is to jointly select and align text pieces, such as tokens or sentences, as a justification for the downstream prediction.", "Our approach employs optimal transport (OT) to find a minimal cost alignment between the inputs.", "However, directly applying OT often produces dense and therefore uninterpretable alignments.", "To overcome this limitation, we introduce novel constrained variants of the OT problem that result in highly sparse alignments with controllable sparsity.", "Our model is end-to-end differentiable using the Sinkhorn algorithm for OT and can be trained without any alignment annotations.", "We evaluate our model on the StackExchange, MultiNews, e-SNLI, and MultiRC datasets.", "Our model achieves very sparse rationale selections with high fidelity while preserving prediction accuracy compared to strong attention baseline models.", "1 Introduction The growing complexity of deep neural networks has given rise to the desire for self-explaining models (Li et al., 2016; Ribeiro et al., 2016; Zhang et al., 2016; Ross et al., 2017; Sundararajan et al., 2017; Alvarez-Melis and Jaakkola, 2018b; Chen et al., 2018a).", "In text classification, for instance, one popular method is to design models that can perform classification using only a rationale , which is a subset of the text selected from the model input that fully explains the model's prediction (Lei et al., 2016; Bastings et al., 2019; Chang et al., 2019).", "This selective rationalization method, often * Denotes equal contribution.", "trained to choose a small yet sufficient number of text spans, makes it easy to interpret the model's prediction by examining the selected text.", "In contrast to classification, very little progress has been made toward rationalization for text matching models.", "The task of text matching encompasses a wide range of downstream applications, such as similar document recommendation (dos Santos et al., 2015), question answering (Lee et al., 2019), and fact checking (Thorne et al., 2018).", "Many of these applications can benefit from selecting and comparing information present in the provided documents.", "For instance, consider a similar post suggestion in a tech support forum as shown in Figure 1. 
The extracted rationales could provide deeper insights for forum users while also helping human experts validate and improve the model.", "In this work, we extend selective rationalization for text matching and focus on two new challenges that are not addressed in previous rationalization work.", "First, since text matching is fundamentally about comparing two text documents, rationale selection should be jointly modeled and optimized for matching.", "Second, the method should produce an interpretable alignment between the selected rationales showcasing their relations for the downstream prediction.", "This is very different from rationalization for text classification, where the selection is performed independently on each input text and an alignment between rationales is unnecessary.", "One popular method for aligning inputs is attention-based models (Bahdanau et al., 2015; Rocktschel et al., 2015; Rush et al., 2015; Xu et al., 2015; Kim et al., 2018).", "However, a limitation of neural attention is that the alignment is rarely sparse, thus making it difficult to interpret how the numerous relations among the text spans lead to the model's prediction.", "Recent work has explored sparse variants of attention (Martins and Astudillo, 2016; Niculae and Blondel, 2017; Lin et al., 2018; Malaviya et al., 2018; Niculae et al., 2018), but the number of non-zero alignments can still be large (Laha et al., 2018).", "Additionally, because of the heavy non-linearity following most attention layers, it is difficult to truly attribute the model's predictions to the alignment, which means that attention-based models lack fidelity.", "We propose to address these challenges by directly learning sparse yet sufficient alignments using optimal transport (OT).", "We use OT as a building block within neural networks for determining the alignment, providing a deeper mathematical justification for the rationale selection.", "In order to produce more interpretable rationales, we construct novel variants of OT that have provable and controllable bounds on the sparsity of the alignments.", "Selecting and aligning text spans can be jointly optimized within this framework, resulting in optimal text matchings.", "Our model is fully end-to-end differentiable using the Sinkhorn algorithm (Cu-turi, 2013) for OT and can be used with any neural network architecture.", "We evaluate our proposed methods on the StackExchange, MultiNews (Fabbri et al., 2019), e-SNLI (Camburu et al., 2018), and MultiRC (Khashabi et al., 2018) datasets, with tasks ranging from similar document identification to reading comprehension.", "Compared to other neural baselines, our methods show comparable task performance while selecting only a fraction of the number of alignments.", "We further illustrate the effectiveness of our method by analyzing how faithful the model's predictions are to the selected rationales and by comparing the rationales to human-selected rationales provided by DeYoung et al. 
(2019) on the e-SNLI and MultiRC datasets.", "Selective Rationalization.", "Model interpretability via selective rationalization has attracted considerable interest recently (Lei et al., 2016; Li et al., 2016; Chen et al., 2018a; Chang et al., 2019).", "Some recent work has focused on overcoming the challenge of learning in the selective rationalization regime, such as by enabling end-to-end differentiable training (Bastings et al., 2019) or by regularizing to avoid performance degeneration (Yu et al., 2019).", "Unlike these methods, which perform independent rationale selection on each input document, we extend selective rationalization by jointly learning selection and alignment, as it is better suited for text matching applications.", "Concurrent to this work, DeYoung et al. (2019) introduce the ERASER benchmark datasets with human-annotated rationales along with several rationalization models.", "Similarly to DeYoung et al. (2019), we measure the faithfulness of selected rationales, but our work differs in that we additionally emphasize sparsity as a necessary criterion for interpretable alignments.", "Alignment.", "Models can be made more interpretable by requiring that they explicitly align related elements of the input representation.", "In NLP, this is often achieved via neural attention (Bah-danau et al., 2015; Chen et al., 2015; Rush et al., 2015; Cheng et al., 2016; Parikh et al., 2016; Xie et al., 2017).", "Many variants of attention, such as temperature-controlled attention (Lin et al., 2018) and sparsemax (Martins and Astudillo, 2016), have been proposed to increase sparsity within the attention weights.", "However, it is still debatable whether attention scores are truly explanations (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019).", "Distance-based methods of aligning text have also been proposed (Li et al., 2019), but they similarly cannot guarantee sparsity or explainability.", "In this work, we explicitly optimize rationale selection and alignment as an integral part of the model and evaluate the degree to which the alignment explains the model's predictions.", "Optimal Transport.", "The field of optimal transport (OT) began with Monge (1781), who explored the problem of determining a minimal cost assignment between sets of equal sizes.", "Kantorovich (1942) relaxed Monge's problem to that of determining an optimal transport plan for moving probability mass between two probability distributions.", "Since the introduction of a differentiable OT solver by Cuturi (2013), OT has seen many applications in deep learning and NLP, such as topic embedding (Kusner et al., 2015), text generation (Chen et al., 2018b), cross-lingual word embedding alignment (Alvarez-Melis and Jaakkola, 2018a), graph embedding (Xu et al., 2019), and learning permutations (Mena et al., 2018).", "Peyr and Cuturi (2019) provides an overview of the computational aspects of OT.", "Unlike prior work, we develop novel additional constraints on the OT problem that produce particularly sparse and interpretable alignments.", "Consider two related text documents D x and D y .", "These documents are broken down into two sets of text spans, S x and S y , where the text spans can be words, sentences, paragraphs, or any other chunking of text.", "These text spans are then mapped to vector representations using a function g ( ) (e.g., a neural network), which produces two sets of vectors representing the inputs, X = { x i } ni =1 = { g ( S xi ) } ni =1 and Y = { y i } mi =1 = { g ( S yi ) } mi =1 , where x i , y 
i ∈ R^d.", "We define an interpretable text matching as an alignment between the text spans in X and Y that explains the downstream prediction.", "Following common practice for previous self-explaining models (Lei et al., 2016; Alvarez-Melis and Jaakkola, 2018b), we specify that a desirable model must produce alignments satisfying the following criteria of interpretability.", "Explicitness.", "The alignment between text spans generated by the model should be an observable and understandable component of the model.", "Our model explicitly encodes the alignment between X and Y as a matrix P ∈ R_+^{n×m}, where P_{i,j} indicates the degree to which x_i and y_j are aligned.", "Sparsity.", "In order for the alignment to be interpretable, the alignment matrix P must be sparse, meaning there are very few non-zero alignments between the text spans.", "A sparser alignment is easier to interpret, as fewer alignments between text spans need to be examined.", "Faithfulness.", "An interpretable text matching is only meaningful if the model's predictions are faithful to the alignment, meaning the predictions are directly dependent on it.", "Similarly to previous work, our model achieves faithfulness by using only the selected text spans (and their representations) for prediction.", "That is, the selected rationales and alignment should be sufficient to make accurate predictions.", "In addition to sufficiency, faithfulness also requires that the model output should be easily attributable to the choice of alignment (see footnote 1).", "For simple attribution, we define our model output as either a linear function of the alignment P or a shallow feed-forward network on top of P.", "In the following sections, we introduce optimal transport as a method to produce interpretable text matchings satisfying all three desiderata.", "An instance of the discrete optimal transport problem consists of two point sets, X = {x_i}_{i=1}^{n} and Y = {y_i}_{i=1}^{m}, with x_i, y_i ∈ R^d.", "Additionally, X and Y are associated with probability distributions a ∈ Δ_n and b ∈ Δ_m, respectively, where Δ_n is the probability simplex Δ_n := {p ∈ R_+^n : ∑_{i=1}^{n} p_i = 1}.", "A cost function c(x, y) : R^d × R^d → R specifies the cost of aligning a pair of points x and y.", "The costs of aligning all pairs of points are summarized by the cost matrix C ∈ R^{n×m}, where C_{i,j} = c(x_i, y_j).", "The goal of optimal transport is to compute a mapping that moves probability mass from the points of X (distributed according to a) to the points of Y (distributed according to b) so that the total cost of moving the mass between points is minimized according to the cost function c.", "This mapping is represented by a transport plan, or alignment matrix, P ∈ R_+^{n×m}, where P_{i,j} indicates the amount of probability mass moved from x_i to y_j.", "The space of valid alignment matrices is the set U(a, b) := {P ∈ R_+^{n×m} : P 1_m = a, P^T 1_n = b}, since P must marginalize out to the corresponding probability distributions a and b over X and Y.", "Under this formulation, the optimal transport problem is to find the alignment matrix P that minimizes the sum of costs weighted by the alignments: L_C(a, b) := min_{P ∈ U(a, b)} ⟨C, P⟩ = min_{P ∈ U(a, b)} ∑_{i,j} C_{i,j} P_{i,j}.", "Note that this optimization is a linear programming problem over the convex set U(a, b).", "As a result, one of the extreme points of U(a, b) must be an optimal solution.", "(Footnote 1: For example, a linear model achieves strong attribution because the importance of each input feature is a constant parameter.)", "Optimal transport is known to produce alignments that are especially sparse.", "In particular, the following propositions characterize the extreme-point solution P* of L_C(a, b) and will be important in designing interpretable alignments in Section 5.", "Proposition 1 (Brualdi (2006), Thm. 8.1.2).", "Any extreme point P* that solves L_C(a, b) has at most n + m − 1 non-zero entries.", "Proposition 2 (Birkhoff (1946)).", "If n = m and a = b = 1_n / n, then every extreme point of U(a, b) is a permutation matrix.", "In other words, while the total number of possible aligned pairs is n · m, the optimal alignment P* has O(n + m) non-zero entries.", "Furthermore, if n = m, then any extreme-point solution P* is a permutation matrix and thus has only O(n) non-zero entries.", "Figure 2 illustrates two alignments, including one that is a permutation matrix.", "[Figure 2 panels: (a) Alignment 1 shown as a graph, (b) Alignment 2 shown as a graph, (c) Alignment 1 shown as a matrix; the graphics themselves are not recoverable from the extracted text.]", "Note that the optimal solution of L_C(a, b) may not be unique in degenerate cases, such as when C_{i,j} is the same for all i, j.", "In such degenerate cases, any convex combination of optimal extreme points is a solution.", "However, it is possible to modify any OT solver to guarantee that it finds an extreme-point (i.e., sparse) solution.", "We provide a proof in Appendix D, although experimentally we find that these modifications are unnecessary, as we nearly always obtain an extreme-point solution.", "L_C(a, b) is a linear programming problem and can be solved exactly with interior point methods.", "Recently, Cuturi (2013) proposed an entropy-regularized objective that can be solved using a fully differentiable, iterative algorithm, making it ideal for deep learning applications.", "Specifically, the entropy-regularized objective is L_C^ε(a, b) := min_{P ∈ U(a, b)} ⟨C, P⟩ − ε H(P), where H(P) is the entropy of the alignment matrix P and ε > 0 controls the amount of entropy regularization.", "In practice, ε can be set sufficiently small such that the solution to L_C^ε(a, b) is a good approximation of the solution to L_C(a, b).", "Conveniently, L_C^ε(a, b) has a solution of the form P* = diag(u) K diag(v), where K = e^{−C/ε} and (u, v) ∈ R_+^n × R_+^m.", "The vectors u and v can be determined using the Sinkhorn-Knopp matrix scaling algorithm (Sinkhorn and Knopp, 1967), which alternately rescales u and v so that P matches the required marginals a and b.", "Since each iteration consists only of matrix operations, the Sinkhorn algorithm can be used as a differentiable building block in deep learning models.", "For instance, in this work we take C as the distance between hidden representations given by a parameterized neural network encoder.", "Our model performs the Sinkhorn iterations until convergence (or a maximum number of steps) and then outputs the alignment P and the total cost ⟨C, P⟩ as inputs to subsequent components of the model.", "Using vanilla OT produces sparse alignments, as guaranteed by Proposition 1, but the level of sparsity is insufficient to be interpretable.", "For instance, Alignment 1 in Figure 2 still has a significant number of non-zero alignment values.", "Motivated by this limitation, we propose to encourage greater sparsity and interpretability by constructing OT problems with additional constraints.", "General Recipe for Additional Constraints.", "Intuitively, an interpretable alignment should be sparse in two ways.", "First, each text span should be aligned to one 
or a very small number of spans in the other input text.", "Second, the total number of Figure 3: An illustration of the process of computing a one-to-two assignment between the points of X and Y .", "aligned pairs should be small enough so that the alignment can be easily examined by a human.", "We modify the OT problem in several ways to guarantee both aspects of sparsity.", "We start by forcing the solution to be an assignment , which is a one-to-one (or one-to-few) alignment such that every non-zero entry in the alignment matrix is equal, thereby simplifying interpretability.", "Alignment 2 in Figure 2 is an example of a one-to-one assignment.", "We also consider two other constructions, one that makes every text span in the alignment optional and another that directly limits the total number of aligned pairs.", "Replica points are exact copies of the original points in X or Y and can be used to control the sparsity of each point's alignment.", "Dummy points , also known as tariff-free reservoirs in prior work, are points that can be aligned to with 0 cost.", "Dummy points are used for absorbing unused probability mass in partial transport, where the constraints are relaxed to P 1 m a and PT 1 n b (Caffarelli and McCann, 2010; Figalli, 2010).", "The idea is to add an appropriate number of replica points and dummy points to create X and Y with | X | = | Y | = N for some N .", "Then by using uniform probability distributions a = b = 1 N /N , Proposition 2 implies that one of the solutions to the OT problem will be a permutation matrix, i.e., a one-to-one assignment between the points in X and Y .", "Since the points of X and Y are included in X and Y , we can directly extract an assignment between X and Y from the assignment between X and Y .", "Figure 3 illustrates the procedure.", "Note that the same solution can be attained without explicitly replicating any points by adjusting the probability distributions a and b , but we use replication for ease of exposition.", "Also note that the Sinkhorn algorithm is compatible with replica and dummy points and the model remains differentiable.", "We now describe three specific instances of this procedure that produce interpretable assignments with different sparsity patterns.", "Without loss of generality, we assume that n = | X | | Y | = m .", "One-tok Assignment.", "In this assignment, every point in the smaller set X should map to k points in the larger set Y , where k { 1 , 2 , . . . 
, ⌊ m/n ⌋ } .", "This will result in a sparsity of kn ≤ ⌊ m/n ⌋ n ≤ m .", "To compute such an assignment, we set Y ′ = Y and we construct X ′ with k copies of every point in X along with m − kn dummy points.", "Since | X ′ | = | Y ′ | = m , applying OT to X ′ and Y ′ produces a one-to-one assignment between X ′ and Y ′ .", "As X ′ contains k replicas of each point in X , each unique point in X is mapped to k points in Y , thus producing a one-to-k assignment.", "The remaining m − kn mappings to dummy points are ignored.", "Relaxed One-to-k Assignment.", "In a relaxed one-to-k assignment, each point in X can map to at most k points in Y .", "As with the one-to-k assignment, we use k replicas of each point in X , but now we add m dummy points to X ′ and kn dummy points to Y ′ , meaning | X ′ | = | Y ′ | = m + kn .", "Because of the number of replicas, this will produce at most a one-to-k assignment between X and Y .", "However, since there is now one dummy point in Y for every original point in X , every original point has the option of aligning to a dummy point, resulting in at most k alignments.", "Note that in this case, the cost function must take both positive and negative values to prevent all original points from mapping to dummy points. Table 1 (Summary of constrained alignment construction and sparsity), with columns Constraint, #R of X, #D in X ′, #D in Y ′, Sparsity ( s ): Vanilla: 1, 0, 0, s ≤ n + m − 1; One-to-k: k, m − kn, 0, s = kn ≤ m; Relaxed one-to-k: k, m, kn, s ≤ kn ≤ m; Exact-k: 1, m − k, n − k, s = k ≤ n.", "Exact-k Assignment.", "An exact-k assignment maps exactly k points in X to points in Y , where k ≤ n .", "An exact-k assignment can be constructed by adding m − k dummy points to X ′ and n − k dummy points to Y ′ , meaning | X ′ | = | Y ′ | = n + m − k .", "In this case, the cost function must be strictly positive so that original points map to dummy points whenever possible.", "This leaves exactly k alignments between original points in X and Y .", "Controllable Sparsity.", "Table 1 summarizes the differences between vanilla OT and the constrained variants.", "The freedom to select the type of constraint and the value of k gives fine-grained control over the level of sparsity.", "We evaluate the performance of all these variants in our experiments.", "Datasets.", "We evaluate our model and all baselines on four benchmarks: two document similarity tasks, MultiNews and StackExchange, and two classification tasks, e-SNLI and MultiRC.", "The e-SNLI and MultiRC tasks come from the ERASER benchmark (DeYoung et al., 2019), which was created to evaluate selective rationalization models.", "We chose those two datasets as they are best suited for our text matching setup.", "StackExchange [2] is an online question answering platform and has been used as a benchmark in previous work (dos Santos et al., 2015; Shah et al., 2018; Perkins and Yang, 2019).", "We took the June 2019 data dumps [3] of the AskUbuntu and SuperUser subdomains of the platform and combined them to form our dataset.", "MultiNews (Fabbri et al., 2019) is a multi-document summarization dataset where 2 to 10 news articles share a single summary.", "We consider (footnotes: [2] https://stackexchange.com/sites ; [3] https://archive.org/details/stackexchange ; Table 2, Statistics for the document ranking datasets, with columns Metric, StackExchange, MultiNews: # docs 730,818 / 10,130; # similar doc pairs 187,377 / 22,623; Avg sents per doc 3.7 / 31; Max sents per doc 54 / 1,632; Avg words per doc 87 / 680; Vocab size 603,801 / 299,732)", "every pair of articles that share a summary to be a similar document pair.", "Table 2 shows summary statistics of the two document ranking 
datasets.", "e-SNLI (Camburu et al., 2018) is an extended version of the SNLI dataset (Bowman et al., 2015) for natural language inference where the goal is to predict the textual entailment relation (entailment, neutral, or contradiction) between premise and hypothesis sentences.", "Human rationales are provided as highlighted words in the two sentences.", "MultiRC (Khashabi et al., 2018) is a reading comprehension dataset with the goal of assigning a label of true or false to a question-answer pair depending on information from a multi-sentence document.", "We treat the concatenated question and answer as one input and the document as the other input for text matching.", "Human rationales are provided as highlighted sentences in the document.", "For StackExchange and MultiNews, we split the documents into 80% train, 10% validation, and 10% test, while for e-SNLI and MultiRC, we use the splits from DeYoung et al. (2019).", "Metrics.", "We evaluate models according to the following three criteria.", "1. Sparsity.", "To evaluate sparsity, we compute the average percentage of active alignments produced by each model, where an alignment is active if it exceeds a small threshold .", "This threshold is necessary to account for numerical imprecision in alignment values that are essentially zero.", "We set = 0 .", "01 n m unless otherwise specified, where n and m are the number of text spans in the two documents.", "2. Sufficiency.", "If a model makes a correct prediction given only the rationales, then the rationales are sufficient.", "We evaluate sufficiency by providing the model only with active alignments and the aligned text representations and by masking non-active inputs (using the threshold ).", "3. Relevance.", "The relevance of rationales is determined by whether a human would deem them valid and relevant.", "We compute relevance using the token-level F1 scores of model-generated rationales compared to human-selected rationales on the e-SNLI and MultiRC datasets.", "We also perform a qualitative human evaluation.", "Baselines and Implementation Details.", "We use the decomposable attention model (Parikh et al., 2016) as our baseline attention model.", "In addition, we compare our model to two attention variants that are designed to encourage sparsity.", "The temperature attention variant applies a temperature term T in the softmax operator (Lin et al., 2018).", "The sparse attention variant adopts the sparsemax operator (Martins and Astudillo, 2016) in place of softmax to produce sparse attention masks.", "Our constrained OT model operates as illustrated in Figure 4. 
After splitting the input documents into sentences, our model independently encodes each sentence and computes pairwise costs between the encoded representations 4 .", "Dummy and replica encodings are added as needed for the desired type of constrained alignment.", "Our model then applies OT via the Sinkhorn algorithm to the cost matrix C to produce an optimal alignment matrix P .", "For the document ranking tasks, the final score is simply (cid:104) C , P (cid:105) .", "For the classification tasks, we use the alignment P as a sparse mask to select encoded text representations, and we feed the aggregated representation to a shallow network to predict the output label, similar to our baseline attention models.", "For a fair comparison, our models and all baselines use the same neural encoder to encode text spans before the attention or OT operation is applied.", "Specifically, we use RoBERTa (Liu et al., 2019), a state-of-the-art pre-trained encoder, for 4 For the e-SNLI dataset, where documents are single sentences, we use the contextualized token representations from the output of the sentence encoder following previous work (Thorne et al., 2019).", "the StackExchange and MultiRC dataset.", "We use use bi-directional recurrent encoders (Lei et al., 2018) for the MultiNews and e-SNLI datasets 5 .", "The value of k for the OT constraints is chosen for each dataset by visually inspecting alignments in the validation set, though model performance is robust to the choice of k .", "In order to compare our models' rationales to human annotations, we use a binary thresholding procedure as described in Appendix C. We report results averaged over 3 independent runs for each model.", "Additional implementation details are provided in Appendix C. 7 Results Synthetic Visualizations.", "Before experimenting with the datasets, we first analyze the alignments obtained by different methods on a synthetic cost matrix in Figure 5. As shown in the figure, all attention baselines struggle to produce sufficiently sparse alignments, even with the use of a small temperature or the sparsemax operator.", "In contrast, our methods are very sparse, as a result of the provable sparsity guarantees of the constrained alignment 5 The input text in the MultiNews dataset is too long for large BERT models.", "The e-SNLI dataset in ERASER contains human-annotated rationales at the word level while BERT models use sub-word tokenization.", "problem.", "For instance, the relaxed one-tok assignment produces fewer active alignments than either the number of rows or columns, and the exactk assignment finds exactly k = 4 alignments.", "StackExchange & MultiNews.", "Table 3 presents the results of all models on the StackExchange and MultiNews datasets.", "We report standard ranking and retrieval metrics including area under the curve (AUC), mean average precision (MAP), mean reciprocal rank (MRR), and precision at 1 (P@1).", "The results highlight the ability of our methods to obtain high interpretability while retaining ranking performance comparable to strong attention baselines.", "For example, our model is able to use only 6 aligned pairs to achieve a P@1 of 96.6 on the MultiNews dataset.", "In comparison, the sparse attention model obtains a P@1 of 97.1 but uses more than 300 alignment pairs and is thus difficult to interpret.", "Model complexity and speed on the StackExchange dataset are reported in Table 7 in Appendix C. 
e-SNLI.", "Table 4 shows model performance on the e-SNLI dataset.", "As with document similarity ranking, we evaluate classification accuracy when the model uses only the active alignments.", "This is to ensure faithfulness, meaning the model truly and exclusively uses the rationales to make predictions.", "Since attention is not explicitly trained to use only active alignments, we also report the accuracy of attention models when using all attention weights.", "As shown in the table, the accuracy of attention methods decreases significantly when we remove attention weights other than those deemed active by the threshold .", "In contrast, our model retains high accuracy even with just the active alignments since sparsity is naturally modeled in our contrained optimal transport framework.", "Figure 6 visualizes the Figure 6: Model accuracy on the e-SNLI dataset when using different percentages of tokens as rationales.", "change to model accuracy when different proportions of tokens are selected by the models.", "Table 4 also presents the token-level F1 scores for the models' selected rationales compared to human-annotated rationales.", "Note that the rationale annotations for this task are designed for token selection rather than alignment and are sometimes only on one of the input sentences.", "Nevertheless, our model obtains F1 scores on par with recent work (DeYoung et al., 2019; Thorne et al., 2019).", "MultiRC.", "Table 5 presents the results on the MultiRC dataset.", "Compared to attention models, our OT-based models achieve similar task performance with a higher rationale F1 score, despite selecting fewer rationales.", "The model variants from DeYoung et al. (2019) in general achieve higher task F1 performance.", "However, their unsupervised model suffers from degeneration due to the challenges of end-to-end training without rationale supervision.", "We also create supervised versions of our models that learn from the human-annotated rationales Model Accuracy Task F1 % Token Premise F1 Hypothesis F1 P&H F1 OT (relaxed 1:1) 82.4 82.4 69.1 25.1 43.7 34.6 OT (exact k = 4 ) 81.4 81.4 38.7 24.3 45.0 35.4 OT (exact k = 3 ) 81.3 81.4 29.6 28.6 50.0 39.8 OT (exact k = 2 ) 81.3 81.3 21.6 24.8 30.6 27.8 Attention 76.3 (82.1) 76.2 37.9 26.6 37.6 32.2 Attention ( T = 0 . 1 ) 73.9 (81.5) 73.9 33.0 28.4 44.1 36.5 Attention ( T = 0 . 01 ) 70.2 (81.4) 69.9 30.6 26.1 38.0 32.2 Sparse Attention 63.5 (75.0) 63.1 12.5 8.8 24.5 17.2 Thorne et al. (2019) (81.0) -22.2 57.8 Lei et al. (2016) -90.3 --37.9 Lei et al. (2016) (+S) -91.7 --69.2 Bert-To-Bert (+S) -73.3 --70.1 Table 4: e-SNLI accuracy, macro-averaged task F1, percentage of tokens in active alignments, and token-level F1 of the model-selected rationales compared to human-annotated rationales for the premise, hypothesis, and both (P&H F1).", "during training.", "These supervised models achieve comparable task performance to and better rationale F1 scores than models from DeYoung et al. (2019), demonstrating the strength of a sparse rationale alignment.", "Supervised training details can be found in Appendix C. Qualitative Studies.", "We performed a human evaluation on documents from StackExchange that reveals that our model's alignments are preferred to attention.", "The results of the human evaluation, along with examples of StackExchange and e-SNLI alignments, are provided in Appendix A. 
8 Conclusion Balancing performance and interpretability in deep learning models has become an increasingly important aspect of model design.", "In this work, we propose jointly learning interpretable alignments as part of the downstream prediction to reveal how neural network models operate for text matching applications.", "Our method extends vanilla optimal transport by adding various constraints that produce alignments with highly controllable sparsity patterns, making them particularly interpretable.", "Our models show superiority by selecting very few alignments while achieving text matching performance on par with alternative methods.", "As an added benefit, our method is very general in nature and can be used as a differentiable hard-alignment module in larger NLP models that compare two pieces of text, such as sequence-to-sequence models.", "Furthermore, our method is agnostic to the underlying nature of the two objects being aligned and can therefore align disparate objects such as images and captions, enabling a wide range of future applications within NLP and beyond.", "We thank Jesse Michel, Derek Chen, Yi Yang, and the anonymous reviewers for their valuable discussions.", "We thank Sam Altschul, Derek Chen, Amit Ganatra, Alex Lin, James Mullenbach, Jen Seale, Siddharth Varia, and Lei Xu for providing the human evaluation." ]
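
The entropy-regularized solver described in the example above is compact enough to sketch directly. A minimal illustration follows; it is not the authors' code, it assumes numpy, and the function name sinkhorn and the default values for eps and n_iters are invented for this sketch.

import numpy as np

def sinkhorn(C, a, b, eps=0.05, n_iters=200):
    """Approximately solve min_{P in U(a,b)} <C, P> - eps * H(P).

    C : (n, m) cost matrix, a : (n,) row marginals, b : (m,) column marginals.
    Returns the alignment matrix P and the transport cost <C, P>.
    """
    K = np.exp(-C / eps)              # elementwise kernel exp(-C / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iters):          # alternating marginal scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = np.diag(u) @ K @ np.diag(v)   # P = diag(u) K diag(v)
    return P, float((C * P).sum())

# Toy usage: uniform marginals over 3 and 4 text spans.
C = np.random.rand(3, 4)
a = np.full(3, 1 / 3)
b = np.full(4, 1 / 4)
P, cost = sinkhorn(C, a, b)

Because every step is a matrix operation, the same loop can be written with differentiable tensors and back-propagated through, which is what lets the alignment sit inside a larger text-matching model.
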
[ "abstain", "objective", "result", "abstain", "objective", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "method", "abstain", "result", "method", "other", "other", "other", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "other", "other" ]
[ "We propose knowledge internalization (KI), which aims to complement the lexical knowledge into neural dialog models.", "Instead of further conditioning the knowledge-grounded dialog (KGD) models on externally retrieved knowledge, we seek to integrate knowledge about each input token internally into the model's parameters.", "To tackle the challenge due to the large scale of lexical knowledge, we adopt the contrastive learning approach and create an effective token-level lexical knowledge retriever that requires only weak supervision mined from Wikipedia.", "We demonstrate the effectiveness and general applicability of our approach on various datasets and diversified model structures.", "Vacuous responses (Li et al., 2016; Ghazvininejad et al., 2018), such as, I don't know , are commonly observed in end-to-end neural dialog models (Shang et al., 2015; Sordoni et al., 2015).", "This is mostly because these models ignore the knowledge that resides in people's minds during a conversation.", "To bridge the gap, many existing works (Moghe et al., 2018; Dinan et al., 2018) have attempted to condition the dialog model on external knowledge, either a sentence or a paragraph, retrieved based on the utterance and/or previous context.", "This curates datasets with utterance-response-knowledge triples (see Fig", "1(a)).", "These knowledge-grounded dialog (KGD) models, despite demonstrated effectiveness, suffer from two major problems.", "First, equipping models with sentence-level knowledge alone will limit responses' informativeness and diversity.", "As shown in Fig", "1(a), with the knowledge retrieved giving the utterance, a KGD model can relate J.K Rowling to Khalsa Aid .", "However, retrieval based solely on sentence embeddings The majority of this work was done while the first author was interning at Tencent AI Lab.", "will result in ignorance of lexical knowledge associated with individual tokens.", "In this example, the knowledge about J.K Rowling , COVID-19 , donates , and India , is ignored during the retrieval, due to the semantic gaps between those lexical knowledge sentences (see Fig", "1(b)) and the utterance.", "This makes it rather difficult (if not impossible) for the model to generate responses carrying relevant information as shown in Fig", "1(b).", "Second, retrieving knowledge for open-domain dialogs during inference incurs heavier computation, often involving similarity search over tens of millions of passages (Petroni et al., 2021).", "Existing systems (Zhao et al., 2020; Zheng et al., 2020) alleviate this problem relying on pre-selecting a 7945 small candidate set based on TF-IDF (Schtze et al., 2008), in sacrifice of the diversity and the accuracy of the retriever.", "Directly conditioning the dialog model on the retrieved text, these models are easily effected by the quality of the constructed candidate set and are thus prone to errors (Dinan et al., 2018; Kim et al., 2020; Zhao et al., 2020).", "In this work, we propose to complement the lexical knowledge into neural dialog models by K nowledge I nternalization ( KI ), a training approach based on contrastive learning (Hadsell et al., 2006).", "The central idea of KI is to integrate more fine-grained lexical knowledge about each input token internally into model parameters (e.g., word embeddings), rather than further conditioning the model on externally retrieved knowledge (e.g., directly copy and/or modify tokens from external knowledge when decoding).", "Our research contributions include: a novel training objective (KI; 
3.2) that infuses lexical semantics into word representations.", "With the knowledge internalized into the contextualized representation of every token, a dialog model can generate informative and diverse responses without engaging an external knowledge retrieval module during inference time, thus making the inference more efficient (6.1); an effective token-level lexical knowledge retriever (4) trained with weak supervision to contextually align tokens in dialog corpora to their related and possibly different knowledge (Ap-pendix C).", "a demonstration of the effectiveness and general applicability of KI with extensive experiments on diversified dialog models and on three benchmark datasets: DailyDialog (Li et al., 2017), Wizard of Wikipedia (Dinan et al., 2018), and Commonsense Reddit Dataset (Zhou et al., 2018).", "To address the vacuous responses problem in neural dialog models, researchers propose to ground dialogs on real world knowledge and construct new corpora that contain utterance-response-knowledge triples.", "Specifically, responses are grounded to external knowledge derived from different knowledge sources (Zhou et al., 2018; Liu et al., 2018; Wu et al., 2019; Dinan et al., 2018; Moghe et al., 2018; Ghazvininejad et al., 2018; Mostafazadeh et al., 2017; Meng et al., 2020; Zhang et al., 2020).", "Among different sources, textual knowledge (Di-nan et al., 2018; Parthasarathi and Pineau, 2018; Qin et al., 2019) receives the most attention as it is easy to obtain and scale.", "However, the construction of knowledge-grounded datasets is costly and time-consuming.", "To build a more practical system without assuming a given knowledge, recent studies enhance KGD models with an extra knowledge selection component (Dinan et al., 2018; Kim et al., 2020; Zheng et al., 2020; Zhao et al., 2020).", "Most existing KGD models can be viewed as models with externalized knowledge , where knowledge is explicitly used as part of the model input.", "The principle behind these models is to copy words and/or modify sentences from external knowledge when generating responses (Wu et al., 2020; Zhu et al., 2017; Zhao et al., 2019).", "Our KI, on the other hand, does not explicitly present knowledge to dialog models for reading and/or copying.", "Instead, we inject and store external knowledge into models' parameters and encourage models to elicit the encoded knowledge during generation.", "The idea of knowledge internalization has also been explored in language modeling.", "Factual knowledge (Zhang et al., 2019; Sun et al., 2020; Liu et al., 2020), visual knowledge (Tan and Bansal, 2020) and syntactic knowledge (Kuncoro et al., 2020) have been injected into language models (LMs) and shown great promise in improving the performance of downstream tasks.", "KI differs from those knowledge-enhanced LMs in two aspects:", "(i) KI can be trained end-to-end with dialog models, while applying LMs on dialog generation often requires multiple rounds of pre-train and fine-tune.", "(ii) KI is lightweight that barely introduces extra parameters to the dialog model while applying LMs usually introduces hundreds of millions of extra parameters.", "In this section, we illustrate how to train a dialog model with knowledge internalization.", "To infuse more fine-grained lexical knowledge to a neural dialog model, we assume a dialog corpus where each token is aligned with relevant knowledge (we will discuss the construction of such a corpus in 4).", "In particular, for an input sentence X in the corpus, we assume each 
token x i ∈ X is associated with a corresponding descriptive sentence K i .", "Given an utterance-response pair ( X, Y ) , where X = { x 1 , x 2 , . . . , x n } and Y = { y 1 , y 2 , . . . , y m } , neural dialog models generally minimize the negative log-likelihood loss: LNLL ( X, Y ) = − Σ_{i=1}^{m} log P ( y i ) , (1)", "where P ( y i ) = P ( y i | y <i , X ) is the probability of generating the i -th response token y i given the utterance X and other tokens generated in previous steps y <i = { y 1 , y 2 , . . . , y i−1 } .", "P ( y i ) is generally modeled by a sequence-to-sequence model (Sutskever et al., 2014), which consists of an encoder and a decoder.", "The encoder represents X as a sequence of hidden vectors H ( X ) = h 1 , h 2 , ..., h n , where each h i is a low-dimensional representation of the token x i .", "The decoder generates y i based on H ( X ) and y <i , often with the attention mechanism (Bahdanau et al., 2014).", "Given a dialog corpus with token-level knowledge as discussed above, we now introduce a new training task: knowledge internalization ( KI ).", "In KI, we seek to boost dialog models by internalizing lexical knowledge into each token's representation.", "In particular, each token x i and its associated knowledge K i are first mapped into a low-dimensional space.", "We then adopt contrastive learning to shorten the distance between x i and K i in the space while enlarging that between x i and other irrelevant knowledge.", "Note that for each x i ∈ X , dialog models' encoder can embed it into a contextualized representation h i .", "Therefore, we only need an extra knowledge encoder to represent K i as g ( K i ) (details will be given in Section 4.2).", "After h i and g ( K i ) are computed, we calculate the similarity between x i and K i by the inner product: s ( x i , K i ) = f 1 ( h i ) ⊤ f 2 ( g ( K i )) , (2) where f 1 and f 2 are the functions that map the h i and g ( K i ) into the same vector space and normalize them.", "For each ( x i , K i ) pair, we randomly sample an in-batch unrelated knowledge K̄ i associated with other input sentences, where K̄ i ≠ K i , to construct a negative sample pair ( x i , K̄ i ) in contrastive learning.", "Finally, the objective function of KI is defined by the contrastive loss between positive and negative sample pairs: LKI ( X ) = Σ_{i=1}^{n} max { 0 , m − s ( x i , K i ) + s ( x i , K̄ i ) } , (3) where m denotes the margin.", "We now illustrate how to deploy KI on a neural dialog model.", "We use a sequence-to-sequence dialog model based on Transformer (Vaswani et al., 2017) as an example.", "The original model is trained to minimize the negative log-likelihood loss of response tokens, i.e., LNLL ( X, Y ) (see Eq. 1).", "We can conveniently integrate KI into the model by reusing the contextualized representations generated by the model's encoder.", "The training objective of a knowledge-internalized dialog model can then be formulated as: L = LNLL ( X, Y ) + λ LKI ( X ) (4) where λ is a weighting hyperparameter.", "Note that the token-level knowledge is only required during training to compute LKI ( X ) .", "At inference time, the relevant knowledge is no longer required, as it has been internalized into the model by KI, making inference more efficient.", "In this section, we present how to train an effective retriever to mine knowledge for each token in the dialog corpora.", "Given a dialog utterance X = { x 1 , x 2 , . . . 
, x n } , the trained retriever will retrieve a relevant knowledge K i for each token x i in X .", "The constructed token-knowledge alignments can then be used to train a knowledge-internalized neural dialog model as in 3. 4.1 Training Data Collection To train such a retriever, we need a corpus with token-level knowledge annotated.", "However, to our best knowledge, no human annotated data exist and it is prohibitively expensive to build one.", "We therefore seek to train the retriever with distant supervision.", "A straight-forward solution is to align the noun words in an utterance to certain knowledge graph triples using entity linking tools (Shen et al., 2014).", "The problem of that is it can only cover 7947 about 15% words in human conversations (Biber et al., 2000).", "To address this issue, we propose to mine token-knowledge distant annotations from Wikipedia.", "In each Wiki article, the first sentence S = { s 1 , s 2 , ..., s n } is mostly declarative that gives a high-level summary on the topic of the article.", "Thus this sentence can used as a lexical knowledge item, denoted as K (note that K and S refer to the same sentence here).", "Inspired by Tan and Bansal (2020), we then further associate every token in the sentence with this knowledge item.", "These constructed alignments (e.g., ( s i , K ) ) can then be used to train a token-level knowledge retriever.", "The core of the retriever's training is to learn a scoring function r ( s i | S, K ) to measure the relevance between a token s i and a knowledge item K , giving s i 'context S .", "Similar as Eq.", "2, we implement the scoring function r ( s i | S, K ) as the inner product between s i 'contextualized token representation f ( h i ) and the knowledge representation f ( g ( K )) .", "Here, we use a pre-trained BERT (Devlin et al., 2019) model to obtain h i ; we apply another pre-trained BERT model to encode knowledge K and further generate g ( K ) with an average-pooling operator.", "Two BERT models will be fine-tuned with the retriever.", "Our training objective is to maximize the relevance score of aligned token-knowledge pairs while minimizing that of unaligned pairs.", "We also adopt the hinge loss similar as in Eq 3 by replacing x i in the dialog corpus to s i in the constructed token-knowledge pairs.", "Once the retriever is trained, we can use it to mine token-level lexical knowledge required in KI.", "We first construct a candidate knowledge base K that consists of 6.4 million knowledge items (first sentence) extracted from Wikipedia articles.", "Given a dialog utterance X = { x 1 , x 2 , . . . 
, x n } , we retrieve a lexical knowledge K i for each token x i by searching for the knowledge item that has the largest relevance score with x i .", "To improve the retrieval results, we further employ two useful strategies:", "(i) Stopword Masking , where we discard knowledge associated with stopwords;", "(ii) Exact Matching , where if an utterance token exactly matches the title of a Wikipedia article, we will directly return the first sentence of this article as the retrieval result.", "The retrieval process has two properties that can significantly improve dialog corpora's knowledge coverage.", "First, the retrieval is contextualized such that a token can be aligned to different knowledge items when it occurs in different contexts.", "Second, the retrieval is at token-level that enables us to associate each dialog sentence with multiple knowledge items (See Appendix C).", "In this section, we present the datasets and metrics used for evaluation.", "Datasets We use three datasets from various domains (statistics in Appendix A).", "The first one is DailyDialog (Li et al., 2017), a multi-turn dialog benchmark that contains daily dialogs recorded as utterance-response pairs.", "However, there is no knowledge associated with the dialogs in DailyDialog, making it difficult to evaluate the informativeness of generated responses.", "To fully illustrate the strength of KI, we further consider two knowledge-grounded datasets:", "(i) Wizard of Wikipedia (WoW) (Dinan et al., 2018), a multi-turn dataset that contains utterance-response-knowledge triples.", "For each dialog, a sentence retrieved from Wikipedia is selected to guide response generation.", "WoW contains two test sets: Test Seen/Unseen, where the latter includes topics that never appear in Train and Valid set.", "(ii) Commonsense Reddit Dataset (CRD) (Zhou et al., 2018): a weakly knowledge-grounded single-turn dataset.", "Each dialog in the dataset is paired with at least one triple automatically extracted from ConceptNet (Speer et al., 2017).", "Metrics We conduct both automatic evaluation and human annotations.", "For automatic evaluation, we evaluate the generated responses from three perspectives 1 : Appropriateness : we employ Perplexity (PPL), corpus-level BLEU-4 (Papineni et al., 2002) and ROUGE-l (Lin, 2004).", "Diversity : the ratio of distinct uni/bi-grams in all generated texts, i.e., Distinct-1/2 (Li et al., 2016).", "1 For PPL and %safe, smaller is better, while for all other metrics, larger is better.", "Informativeness : For WoW, we consider wikiF1 (Dinan et al., 2018), the overlapping F1 between the generated response and the grounded knowledge.", "For CRD, we calculate entity score (Ent.) (Zhou et al., 2018), the average number of entities per response.", "To further measure the likelihood of generating safe responses, we define %safe : the percentage of responses that contains I'm not sure or I don't know.", "2 We also report the accuracy of knowledge selection (ACC) following Zheng et al. 
(2020).", "We further perform human annotations by randomly sampling 200/200/300/300 examples from WoW Test Seen/WoW Test Unseen/ CRD/DailyDialog, respectively.", "We recruit 5 annotators from a commercial annotation company to rate each response on a scale of 1-5 for its appropriateness (Zhang et al., 2020; Zheng et al., 2020) and informativeness (Young et al., 2018; Zhu et al., 2019).", "The former measures whether the topic of the response fits that of the utterance, while the latter evaluates whether a response provides new information.", "A response is scored 1 if it is not appropriate/informative at all, 3 if part of the response is appropriate/informative, 5 if it is highly related to utterance and context or it can provide rich information to deepen the discussion.", "2 and 4 are for decision dilemmas.", "it with three sets of baselines:", "2. We then investigate whether KI can complement or even further improve the state-of-the-art KGD model's performance.", "All model structures and training setups are given in Appendix B. 2 Upon manual inspection, we find that these two are the most common safe responses generated.", "1. We first investigate the effectiveness and general applicability of KI by applying KI on conventional dialog models that are randomly initialized and trained with utterance-response pairs only.", "3. As discussed in 2, although LMs differ from KI in many aspects, they also capture knowledge in their parameters.", "We thus compare KI with LMs to investigate its effectiveness in encouraging informative and appropriate responses.", "We first deploy KI on two representative neural dialog models that do not directly condition on any external knowledge:", "(i) Seq2Seq : a LSTM-based (Hochreiter and Schmidhuber, 1997) sequence-to-sequence model with the attention mechanism (Bahdanau et al., 2014);", "(ii) Transformer (Vaswani et al., 2017): an encoder-decoder architecture relying solely on the attention mechanisms.", "Effectiveness As shown in Table 1's Setup 1 (rows 1-8), dialog models with KI consistently outperform their counterparts without KI on almost all the metrics across the datasets used.", "We want to point out the advantage of KI from two perspectives: (1) Promoting informativeness.", "We first observe that applying KI can significantly improve the wikiF1 and Ent.", "scores.", "Unlike KGD models that can generate informative responses by explicitly copying words from given knowledge, models discussed here are not provided with any external knowledge during testing (thus copy mechanism is not applicable for them).", "This suggests that the improvement in informativeness should be attributed to the effectiveness of KI in injecting knowledge into models' parameters.", "The Info.", "scores from human evaluation in Table 2 can also substantiate our findings.", "(2) Promoting diversity and reducing occurrence of safe response.", "Compared with the plain models, models with KI can significantly improve the Distinc-1/2 scores on all the test sets (some-times doubled, even tripled).", "We also see a significant reduction of safe responses by the gap in %safe scores.", "Those improvements are powered by the rich lexical knowledge used in KI (see Appendix C).", "Efficiency Besides the improvements in responses' quality, KI is also very efficient during inference.", "We report the decoding speed of Transformer and Transformer+KI in Table 3. 
As we can see, KI does not incur any extra computation during inference.", "We then apply KI on DiffKS (Zheng et al., 2020) a state-of-the-art model that uses a knowledge-aware decoder to generate a response based on", "utterance and the knowledge retrieved from a set of candidates.", "In the empirical study, DiffKS has outperformed many KGD models like CCM (Zhou et al., 2018) 4 and ITDD (Li et al., 2019).", "We enhance DiffKS by applying KI on its context encoder.", "The rest of the model remains unchanged.", "Table 1 Rows 9-10 show that DiffKS with KI improves ACC over the plain DiffKS model.", "The reason is that with the injection of token-level knowledge, DiffKS can better understand the utterance, which leads to more accurate knowledge selection and thus less noisy external knowledge.", "As a result, we observe clear gains on overlapping-based metrics (BLEU and ROUGE).", "These results emphasize the importance of more fine-grained knowledge in KGD.", "Human evaluation results (Table 2) also suggest that KI can help KGD models in generating more informative and appropriate responses.", "We follow previous practice (Rothe et al., 2020) to replace the Transformer's encoder with LMs and keep the decoder the same as the Transformer above.", "5 We consider two baselines:", "(i) Bert2Rnd : Initializing Transformer's encoder with a pre-trained BERT, which has been shown capturing rich factual knowledge during pre-training (Petroni et al., 2019; Jiang et al., 2020).", "(ii) Ernie2Rnd : Initializing the encoder with ERNIE 2.0 (Sun et al., 2020), a knowledge-enhanced BERT which is pre-trained with novel objectives that injecting lexical, 4 Comparison with CCM is in Appendix D 5 We keep the hidden state dimension of decoder consistent with the LMs to enable encoder-decoder attention.", "syntactic, and semantic knowledge into its parameters (Zhang et al., 2019).", "From Table 4, we see that parameters of LM-based models (Bert2Rnd and Ernie2Rnd) are more than three times than that of the Transformer baseline.", "But they do not seem to help improve informativeness (based on wikiF1, BLEU-4, and Info.) of responses.", "This indicates that although pre-trained LMs can encode knowledge in their parameters, eliciting the encoded knowledge for response generation is difficult when we only have utterance-response pairs for training.", "Another reason might be that previously learned knowledge is forgotten due to catastrophic forgetting (McCloskey and Cohen, 1989).", "Comparing with knowledge-enhanced LMs, KI is more lightweight and more effective.", "In addition, we observe that introducing LMs can significantly improve responses' diversity as KI does.", "However, according to Appr.", "metric and upon manual examination, we find that although the generated responses are diverse, they are often inconsistent with the context or hallucinating non-existing facts (e.g., \"Yea, Canada is the largest country in the US.\").", "These are known issues for LMs as discussed in Dou et al. (2021); Shuster et al. (2021); Chen et al. 
(2020).", "We also apply KI on Bert2Rnd/Ernie2Rnd, but we do not observe significant improvements as when applied on randomly initialized models.", "This could be due to the fact that we implement KI using knowledge from Wikipedia, which is already part of LMs' training corpora.", "We leave it as future work to investigate how to use KI to elicit knowledge from LMs better (e.g., use adapters (Xu et al., 7950 Model DailyDialog CRD WoW Seen WoW Unseen Model WoW Seen WoW Unseen Appr.", "In this section, we perform an in-depth analysis to understand the effectiveness of KI.", "We investigate the working principle of KI by visualizing the token embeddings learned on WoW.", "We use principal component analysis (PCA) to map embeddings into a two-dimensional space as shown in Fig 2. Since there is no co-occurrence of British and Rowling in WoW, their embeddings learned by Transformer are distant (see Fig", "2(a)).", "However, their embeddings learned by Transformer+KI (see Fig", "2(b)) are much closer.", "This is because KI injects lexical knowledge (i.e., a British author) into the embedding of Rowling.", "Specifically, the Euclidean distances between British and Rowling are 0.37 for Transformer and 0.22 for Transformer+KI, respectively.", "This observation sheds light on the working principle of KI: the contrastive learning objective shortens the embedding distance between a token and tokens from its lexical knowledge.", "Thus when decoding, if a token is predicted (e.g. Rowl-ing), its relevant knowledge tokens (e.g., British) are likely to receive high probabilities and be selected in the following steps (see the J.K Rowling example in Fig", "1(b).", "Firstly, we experiment with a model variant (de-noted as Random), which randomly assign knowledge to each utterance token.", "Results in Table 5 (Row 2) validate the effectiveness of the proposed token-knowledge retriever.", "To further show the advantage of token-level knowledge, we consider a model variant in which we degenerate token-level KI to sentence-level by assigning all utterance tokens to a same lexical knowledge (we denote it as Sentence-level knowledge ).", "Given the lexical knowledge retrieved for each token in an utterance, the sentence-level knowledge is chosen as the most-frequent one among all token-level knowledge.", "The results are summarized in Table 5 (Row 3).", "Note that token-level knowledge results in better performance than sentence-level knowledge.", "This shows that fine-grained information is useful in promoting more informative and diverse responses.", "Lastly, we dive deep into the lexical knowledge retrieved to investigate which type of knowledge is most helpful in response generation.", "We classify a retrieved knowledge into two types: factual knowledge, which describes a real-world subject (e.g., knowledge about J.K Rowling), and is often associated with noun words in the utterance; linguistic knowledge, which explains the meaning of certain words (e.g., knowledge about donate, see Fig", "1(b), and is often associated with words except nouns.We use part-of-speech (POS) tags to classify tokens and their associated knowledge.", "We consider two model variants that only use fac-tual/linguistic knowledge in KI respectively, denoted as factual and linguistic .", "In Fig 3, we compare these two model variants to a vanilla model without KI (denoted as base ), and a full model that uses both knowledge (denoted as both ).", "We find that injecting factual knowledge brings significant improvements on BLEU-4 and 
ROUGE-l.", "We also observe similar, albeit smaller improvements when equipping with linguistic knowledge.", "More interestingly, these two types of knowledge can complement one another to further improve the model 7951 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.4 0.2 0.0 0.2 0.4 rowling british author harry potter", "performance.", "This emphasizes the need to consider non-factual knowledge in KGD, which is usually ignored in previous study.", "To understand what causes the difference between using factual and linguistic knowledge, we compute Knowledge Coverage : the percentage of ground truth response tokens that have been recalled in the retrieved knowledge.", "As we can see from Fig", "3(c), factual knowledge is more helpful because people tend to respond based on knowledge related to subjects (usually nouns) appearing in the dialog.", "We show an example case in Appendix E to demonstrate how KI improves dialog generation and what the limitation is.", "We propose knowledge internalization (KI), which aims to incorporate the lexical knowledge into neural dialog models.", "Models with KI can generate informative and diverse responses without explicitly conditioning on external knowledge.", "To provide the fine-grained knowledge needed in KI, we also build an effective token-level lexical knowledge retriever that contextually align tokens in a sentence to their related knowledge.", "We show the effectiveness and general applicability of KI by evaluating KI on various datasets and diversified model structures.", "This project is supported by the Tencent AI Lab Rhino-Bird Focused Research Program, the Shanghai Committee of Science and Technology, China (Grant No. 21DZ1100100).", "This research is also partly supported by the HKU-TCL Joint Research Center for Artificial Intelligence fund (Project 200009430)." ]
[ "objective", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "objective", "abstain", "method", "result", "other", "other" ]
[ "In zero-shot multilingual extractive text summarization, a model is typically trained on English summarization dataset and then applied on summarization datasets of other languages.", "Given English gold summaries and documents, sentence-level labels for extractive summarization are usually generated using heuristics.", "However, these monolingual labels created on English datasets may not be optimal on datasets of other languages, for that there is the syntactic or semantic discrepancy between different languages.", "In this way, it is possible to translate the English dataset to other languages and obtain different sets of labels again using heuristics.", "To fully leverage the information of these different sets of labels, we propose NLSSum ( N eural L abel S earch for Sum marization), which jointly learns hierarchical weights for these different sets of labels together with our summarization model.", "We conduct multilingual zero-shot summarization experiments on MLSUM and WikiLingua datasets, and we achieve state-of-the-art results using both human and automatic evaluations across these two datasets.", "The zero-shot multilingual tasks, which aim to transfer models learned on a high-resource language (e.g., English) to a relatively low-resource language (e.g., Turkish) without further training, are challenging (Ruder et al., 2019).", "Recently, large pre-trained multilingual transformers such as M-BERT (Devlin et al., 2019), XLM (Lample and Conneau, 2019), and XLM-R (Conneau et al., 2020) have shown remarkable performance on zero-shot multilingual natural language understanding tasks.", "During pre-training, these transformer models project representations of different languages Work done during the first author's internship at Microsoft Research Asia.", "Sentence (English, Label 1): He was never charged in that Caribbean Nation.", "Reference Summary (English): He was arrested twice, but never charged in Natalee Holloway's disappearance.", "Translated Sentence (German, Label 0): Er wurde jedoch nie in dieser karibischen Nation angeklagt .", "Translated Reference Summary (German): Beim Verschwinden von Natalee Holloway wurde er zweimal verhaftet, aber nie angeklagt .", "into the same vector space, which makes the transfer learning across different languages easier during fine-tuning (Gong et al., 2021).", "In zero-shot extractive summarization, we train an extractive model (based on a pre-trained multilingual transformer) on English summarization dataset, which selects important sentences in English documents.", "Then, we apply this trained model to documents of a different language (i.e., extracting sentences of documents in another language).", "In this paper, we aim to enhance the zero-shot capabilities of multilingual sentence-level extractive summarization.", "In text summarization, most datasets only contain human-written abstractive summaries as ground truth.", "We need to transform these datasets into extractive ones.", "Thus, a greedy heuristic algorithm (Nallapati et al., 2017) is employed to add one sentence at a time to the candidate extracted summary set, by maximizing the ROUGE (Lin, 2004) between candidate summary set and the gold summary.", "This process stops when none of the remaining sentences in the document can increase the ROUGE anymore.", "These selected sentences are labelled as one and all the other sentences labeled as zero.", "While the labels obtained from this greedy algorithm are monolingual-oriented and may not be suitable for multilingual transfer.", 
"For the example in Table 1, the English sentence is quite likely to be selected as a summary sentence, since it greatly overlaps with the English reference (high ROUGE).", "While when the document and the summary are translated into German, the ROUGE between the sentence and the summary is significantly lower 561 (fewer n -gram overlap).", "Then, another sentence will be selected as substitution.", "The greedy algorithm yields different labels on the English data and the translated data and these labels may complement for each other.", "We define this discrepancy as monolingual label bias , and it is the key to further improve the performance of zero-shot multilingual summarization.", "To address the above problem, we design a method to create multiple sets of labels with different machine translation methods according to the English summarization dataset, and we employ NLSSum ( N eural L abel S earch for Sum marization) to search suitable weights for these labels in different sets.", "Specically, in NLSSum, we try to search the hierarchical weights (sentence-level and set-level) for these labels with two neural weight predictors and these label weights are used to train our summarization model.", "During training, the two neural weight predictors are jointly trained with the summarization model.", "NLSSum is used only during training and during inference, we simply apply the trained summarization model to documents in another language.", "Experimental results demonstrate the effectiveness of NLSSum, which significantly outperforms original XLMR by 2.25 ROUGE-L score on MLSUM (Scialom et al., 2020).", "The human evaluation also shows that our model is better compared to other models.", "To sum up, our contributions in this work are as follows: To the best of our knowledge, it is the first work that studies the monolingual label bias problem in zero-shot multilingual extractive summarization.", "We introduce the multilingual label generation algorithm (Section 3.5) to improve the performance of multilingual zero-shot models.", "Meanwhile, we propose the NLSSum architecture (Section 3.6) to search suitable weights for different label sets.", "Extensive experiments are conducted with detailed analysis, and the results across different datasets demonstrate the superior performance on multilingual datasets.", "In MLSUM, the zero-shot performance on Russian is even close to its supervised counterpart.", "There has been a surge of research on multilingual pretrained models, such as multilingual BERT (Devlin et al., 2019), XLM (Lample and Conneau, 2019) and XLM-RoBERTa (Conneau et al., 2020).", "For multilingual summarization, the summarize-then-translate and translate-then-summarize are widely used approaches in prior studies Lim et al. 
(2004).", "There is another effective multi-lingual data augmentation, a method that replaces a segment of the input text with its translation in another language (Singh et al., 2019).", "On the other hand, large-scale multilingual summarization datasets have been introduced (Scialom et al., 2020; Ladhak et al., 2020), which enable new research directions for the multilingual summarization.", "Nikolov and Hahnloser (2020) applies an alignment approach to collect large-scale parallel resources for low-resource domains and languages.", "In this paper, we aim to advance the multilingual zero-shot transferability, by training extractive summarization on English and inferring on other languages.", "Let D = ( s 1 , s 2 , ..., s N ) denotes a document with N sentences, where s i = ( w i 1 , w i 2 , ..., w i | s i | ) is a sentence in D with | s i | words.", "S is the human-written summary.", "Extractive summarization can be considered as a sequence labeling task that assigns a label y i { 0 , 1 } to each sentence s i , where y i = 1 indicates the i -th sentence should be included in the extracted summary.", "The gold labels of sentences in D are obtained from ( D , S ) by the 562 \u0000: (\u0000 \u00006 ,\u0000 \u00006 ) \u0000: (\u0000,\u0000) \u0000: (\u0000 \u00006 ,\u0000 \u00006 ) \u0000: (\u0000,\u0000 ' ) MT \u0000 \u00006 , FR \u0000, EN \u0000, EN \u0000 \u00006 , FRWR (100%) \u0000, EN \u0000 \u00006 , FRMT WR (100%) \u0000 ' , EN \u0000, EN Figure 2: Four Sets of Multilingual Label.", "greedy heuristic algorithm (Nallapati et al., 2017), which adds one sentence at a time to the extracted summary, skipping some sentences to maximize the ROUGE score of S and the extracted sentences.", "In multi-lingual zero-shot setting, the summarization model is trained on English dataset and is finally applied on documents of other languages.", "Our sentence encoder builds upon the recently proposed XLMR (Conneau et al., 2020) architecture, which is based on the deep bidirectional Transformer (Vaswani et al., 2017) and has achieved state-of-the-art performance in many multilingual zero-shot understanding tasks.", "Our extractive model is composed of a sentence-level Transformer TS (initialized with XLMR) and a document-level Transformer TD (a two-layer Transformer).", "For each sentence s i in the input document D , TS is applied to obtain a contextual representation for each word w ij : [ u 11 , u 12 , ..., u N | s N | ] = TS ([ w 11 , w 12 , ..., w N | s N | ]) (1) Similar to Liu and Lapata (2019), the representation of a sentence s i is acquired by taking the representation of the first token in the sentence u i 1 .", "The document-level Transformer TD (a two-layer inter-sentence Transformer), which is stacked to TS , takes s i as input and yields a contextual representation v i for each sentence.", "We intend this process to further captures the sentence-level features for extractive summarization: [ v 1 , v 2 , ..., v N ] = TD ([ u 11 , u 21 , ..., u N 1 ]) (2) For sentence s i , the final output prediction of the extractive model y i (i.e., the probability of being selected as summary) is obtained through a linear and a sigmoid classifier layer: y i = ( W o v i + b o ) (3) where W o and b o are the weight matrix and bias term.", "Next we introduce how we obtain the neural labels for model training.", "includes five steps as follows.", "(II)", "Multilingual Label Generation : The extractive model is supervised by multilingual label, which consists of four sets of labels, according to different 
strategies.", "(III)", "Neural Label Search : In this step, we design the hierarchical sentence-level and set-level weights for labels of different strategies.", "The final weights are calculated with a weighted average and assigned to corresponding sentences.", "(IV)", "Fine-Tuning : We fine-tune our extractive model the augmented English document (gen-erated in Step I) with supervision from the weighted multilingual labels (generated in Step III), as shown in Figure 1. (V) Zero-Shot : We apply the model fine-tuned on English data (Step IV) to extract sentences on documents of the target language.", "(I) Multilingual Data Augmentation : This step aims to enhance the multilingual transfer capability of our extractive model and alleviate the discrepancy between training (on English) and inference (on unseen languages).", "In the training process, only the raw English documents and its paired summary labels are available.", "We use the following two methods for multilingual data argumentation of English documents, which we intend the model to align its English representations with representations in other languages.", "Word Replacement (WR) Similar to Qin et al. (2020), we enhance multilingual transferability by constructing Word Replacement data in multiple languages dynamically .", "Let FR denote a foreign language.", "Specifically, a set of words are randomly 563 chosen in raw English documents and replaced with words in FR using the bilingual dictionary MUSE (Conneau et al., 2018).", "This approach can in some degree align the replaced word representations in FR with their English counterpart by mixing with the English context.", "Machine Translation (MT) The above augmentation method is applied dynamically during training, and Machine Translation yet is another offline strategy to augment data.", "First, we translate documents and their paired summaries from English into the target language FR using the MarianMT system 1 (Junczys-Dowmunt et al., 2018).", "Then, the labels are generated on the translated data with the same greedy algorithm as on English data.", "Finally, the extractive model is fine-tuned on the translated documents with the supervision of new labels, and inferred on the original FR document.", "Unfortunately, the performance of machine translation is instable with the noise or error propagation (Wan et al., 2010).", "Therefore, we choose the word replacement method here to enhance the input document and the argumented document is served as the input of our extractive model.", "Note that we do use both the word replacement and machine translation methods to generate multilingual labels (see the next section).", "Given an English article D and its summary S , we can obtain its extractive labels using the greedy algorithm introduced in Section 3.1.", "Label Set U a Let U a = GetPosLabel ( D , S ) denote the indices of sentences with positive labels, where GetPosLabel ( D , S ) returns the indices of positive labeled sentences in the original English document D using the greedy algorithm.", "The labels created on English data ( D , S ) may not be optimal in multilingual settings (inference on a different language).", "As shown in Figure 2, we therefore create yet another three label sets using the WR and MT methods introduced earlier to simulate the multilingual scenario during inference time.", "Label Set U b To create labels based foreign language (FR) data, we translate both the English document D and its summary S to FR using the MT method in Section 3.4, resulting DMT 
and SMT (also see Figure 2).", "Again by using the greedy al-1 https://github.com/marian-nmt/marian gorithm, we obtain the indices of sentences with positive labels U b = GetPosLabel ( DMT , SMT ) .", "Label Set U c Label set U c is also based on FR data.", "To make label set U c different from U b , we translate D to DMT using the MT method, while we translate S to SWR using the WR method (we do 100% word replacement) with the EN-FR dictionary.", "The resulting label set U c = GetPosLabel ( DMT , SWR ) .", "Label Set U d Label set U d is based on English data.", "The idea is to create a paraphrased English summary S (cid:48) using the back translation technology.", "We first translate S to SMT using MT method and translate SMT back to English S (cid:48) using the WR method (100% word replacement).", "We use different translation method for forward and backward translations to maximize the different between S and S (cid:48) .", "Finally, U d = GetPosLabel ( D , S (cid:48) ) .", "Note that there are also many other possible strategies for creating multilingual labels and we only use these four strategies above as examples to study the potential of multilingual labels.", "Intuitively, the contributions of these four label sets for multilingual transferability are different, and the MT and WR translation methods may introduce translation errors, which result noisy labels.", "Therefore, we introduce the Neural Label Search in the next section to find suitable weights for these multilingual labels.", "In this section, we assign a weight for each sentence in a document and the weight will be used as the supervision to train our extractive model.", "Note that the weight is a multiplication of a sentence level weight and a label set level weight.", "Let T denote the sentence level weight predictor and T the set level weight predictor.", "The implementation of T ( ) = ( g ( T (cid:48) ( ))) is a two-layer transformer model T (cid:48) ( ) followed by a linear layer g ( ) and a sigmoid function.", "The implementation of T is the same as T , but with different parameters.", "[ 1 , 2 , ..., N ] = T ([ u 11 , u 21 , ..., u N 1 ]) i = (cid:40) i , if i U 0 , otherwise (4)", "where U = U a U b U c U d .", "Note that we only predict weights for sentences with non-zero labels, since we believe that these sentences, which are the minority, are more informative than zero-label sentences.", "The computation of T is similar, but we first do a mean pooling over sentences in each label set.", "where n a , n b , n c , n d are sizes of the four label sets.", "The final weight l i for sentence s i is 0 when i / U ( i does not belong to any label set).", "Otherwise, the computation of l i is as follows.", "where if i U j , i j is j , else i j is 0 and m i is the number of label sets containing i .", "Note that one sentence may belong to multiple label sets, so we normalize its ij weights in Equation (5).", "Weight Normalization In this paper, we only calculate the multilingual weights for multilingual labels, in which the corresponding sentences are all selected as summary sentences by different document-summary pairs, as shown in the Figure 2. The label weights l i are used to train our summarization model, whose output y i is through a sigmoid function (Equation 3).", "y i > 0 .", "5 means sentence s i could be selected as in summary.", "Therefore, when i U , we rescale l i to [0 . 5 , 1 . 
0] : l i = ( l i - l min ) / ( 2 ( l max - l min )) + 0 . 5 .", "where l max and l min are the maximum and minimum value of l i , when i U .", "In this section, we present how we train our extractive model as well as the two weight predictors T and T .", "Note that we train the components above jointly.", "We train the extractive model using both the English labels y a (created using the greedy algorithm) as well as the label weights generated in Section 3.6.", "To train T , we use binary labels y , where in one document, y i = 1 when i U , otherwise y i = 0 .", "To train T , we again use binary labels y , but these labels are on set level rather [Table 2: Data Statistics, # Docs (Train / Val / Test): CNN/DM, English 287,227 / 13,368 / 11,490 | MLSUM, German 220,887 / 11,394 / 10,701 | MLSUM, Spanish 266,367 / 10,358 / 13,920 | MLSUM, French 392,876 / 16,059 / 15,828 | MLSUM, Russian 25,556 / 750 / 757 | MLSUM, Turkish 249,277 / 11,565 / 12,775 | WikiLingua, English 99,020 / 13,823 / 28,614 | WikiLingua, German 40,839 / 5,833 / 11,669 | WikiLingua, Spanish 79,212 / 11,316 / 22,632 | WikiLingua, French 44,556 / 6,364 / 12,731]", "than sentence level.", "Defining positive examples for T is straight-forward and we set y q = 1 when q { U a , U b , U c , U d } (each label set corresponds to one positive example).", "For negative examples in one particular document, we randomly sample three sentence indices from sentences with zero labels as one negative example.", "We finally make the numbers of positive and negative examples for T close to 1:1.", "The final loss is a sum of the four losses above: L = CE ( y, y a ) + CE ( y, l )+ CE ( , y ) + CE ( , y ) (7) where CE is the cross entropy loss; l is the weighted multilingual label (Section 3.6); y a , y , and y are the binary labels for the supervision of y , , and .", "Specifically, = [ 1 , 2 , . . .
, N ] and = [ a , b , c , d ] (just as Equation 4 and 5).", "During the zero-shot inference, we simply apply the model trained on the English dataset using the objectives above to other languages.", "MLSUM & CNN/DM MLSUM is the first large-scale multilingual summarization dataset (Scialom et al., 2020), which is obtained from online newspapers and contains 1.5M+ document/summary pairs in five different languages, namely, French(Fr), German(De), Spanish(Es), Russian(Ru), and Turk-ish(Tr).", "The English dataset is the popular CNN/Daily mail (CNN/DM) dataset (Hermann et al., 2015).", "Our model is trained on CNN/DM.", "WikiLingua A large-scale, cross-lingual dataset for abstractive summarization (Ladhak et al., 2020).", "The dataset includes 770K article and summary 565 Models MLSUM De Es Fr Ru Tr avg Oracle (cid:63) 52.30 35.78 37.69 29.80 45.78 40.27 Lead-2 (cid:63) 33.09 13.70 19.69 5.94 28.90 20.26 Supervised Pointer-Generator 35.08 17.67 23.58 5.71 32.59 22.99 mBERTSum-Gen 42.01 20.44 25.09 9.48 32.94 25.99 XLMRSum (cid:63) 41.28 21.99 24.12 10.44 33.29 26.22 MARGE (Train One) 42.60 22.31 25.91 10.85 36.09 27.55 MARGE (Train All) 42.77 22.72 25.79 11.03 35.90 27.64 Zero-Shot MARGE 30.01 17.81 19.39 8.67 29.39 21.05 mBERTSum (cid:63) 17.36 17.27 19.64 8.37 19.30 16.39 XLMRSum (cid:63) 32.05 19.49 22.20 8.70 27.64 22.02 XLMRSum-MT (cid:63) w/ U a 29.34 21.14 23.82 8.68 24.23 21.44 XLMRSum-MT (cid:63) w/ U b 29.70 21.18 23.62 9.37 24.27 21.63 XLMRSum-WR (cid:63) 32.37 21.03 23.67 9.34 30.10 23.30 NLSSum-Sep (cid:63) 34.21 21.24 23.92 10.09 31.68 24.23 NLSSum (cid:63) 34.95 21.20 23.59 10.13 31.49 24.27 Table 3: ROUGE-L on MLSUM dataset.", "pairs in 18 languages from WikiHow 2 .", "Our training setting is identical to that of MLSUM, our extractive model is trained on the English data and inferred on other three languages (French, German, Spanish).", "MLSUM and WikiLingua are described in detail in Table 2. 4.2 Evaluation Similar to Liu and Lapata (2019), we also select the top three sentences as the summary, with Trigram Blocking to reduce redundancy.", "Following Scialom et al. 
(2020), we report the F1 ROUGE-L score of NLSSum with a full Python implemented ROUGE metric 3 , which calculates the overlap lexical units between extracted sentences and ground-truth.", "Following Lin (2004), to assess the significance of the results, we applied bootstrap resampling technique (Davison and Hinkley, 1997) to estimate 95% con-fidence intervals for every correlation computation.", "Our implementation is based on Pytorch (Paszke et al., 2019) and transformers.", "The pre-trained model employed in NLSSum is XLMR-Large .", "We train NLSSum on one Tesla V100 GPU for 100,000 steps (2 days) with a batch size of 4 and gradient accumulation every two steps.", "Adam with 1 = 0 .", "9 , 2 = 0 .", "999 is used as optimizer.", "The learning rate is linearly increased from 0 to 1 e 4 in the first 2,500 steps (warming-up) and linearly decreased thereafter.", "For the source document data augmentation, we use a 0.5 word replacement rate 2 https://www.wikihow.com 3 https://github.com/pltrdy/rouge Models WikiLingua De Es Fr avg Oracle 30.81 36.52 34.64 33.99 Lead-3 16.32 19.78 18.40 18.17 mBERTSum 18.83 22.49 20.91 20.74 XLMRSum 22.10 26.73 25.06 24.63 XLMRSum-MT 21.92 26.41 24.75 24.36 XLMRSum-WR 22.20 26.78 25.10 24.69 NLSSum 22.45 26.98 25.34 24.92 Table 4: Zero-Shot ROUGE-L Results of WikiLingua with a bilingual dictionary (Conneau et al., 2018).", "Oracle sentences are extracted by the greedy algorithm introduced in Section 3.1.", "Lead-K is a simple baseline to choose the first k sentences in a document as its summary.", "We use k = 2 on MLSUM and k = 3 on WikiLingua, which lead to the best results.", "Pointer-Generator augments the standard Seq2Seq model with copy and coverage mechanisms (See et al., 2017).", "mBERTSum-Gen is based on the multilingual version BERT (mBERT; Devlin et al. 2019) and it is extended to do generation with a unified masking method in UniLM (Dong et al., 2019).", "MARGE is a pre-trained seq2seq model learned with an unsupervised multilingual paraphrasing objective (Lewis et al., 2020).", "mBERTSum , XLMRSum , XLMRSum-MT and XLMRSum-WR are all extractive models described in Section 3.2 and their sentence encoders are either initialized from mBERT or XLMR-Large .", "They are all trained on the Enlgish dataset.", "XLMRSum-MT is trained on the English training data argumented with machine translation.", "While XLMRSum-WR is trained on the English training data argumented with bilingual dictionary word replacement.", "ROUGE Results on MLSUM Table 3 shows results on MLSUM.", "The first block presents the Oracle upper bound and the Lead-2 baseline, while the second block includes the supervised summarization results.", "Results of Pointer-Generator, mBERTSum-Gen are reported in Scialom et al. (2020), while results of MARGE are reported in Lewis et al. 
(2020).", "The results of MARGE training on all languages jointly (Train All) are slightly better than its counterpart when training on each language separately (Train One).", "While we see a different trend with other models.", "Comparing ex-566 Models 1st 2nd 3rd 4th MeanR mBERTSum 0.07 0.25 0.31 0.37 2.98 XLMRSum 0.16 0.28 0.27 0.29 2.69 NLSSum 0.28 0.32 0.2 0.2 2.32 Oracle 0.49 0.15 0.22 0.14 2.01 Table 5: Human Evaluation on MLSUM, German tractive models against abstractive models in the supervised setting, the abstractive paradigm is still the better choice.", "We present the zero-shot results in the third block.", "All models are trained on the Enlgish summarization dataset and infered on dataset of other languages.", "With a decent multi-lingual pre-trained model, the extractive XLMRSum performs better than the abstractive MARGE, which demonstrates the superiority of extractive approaches in zero-shot summarization.", "When applying machine translation based (XLMRSum-MT) and multi-lingual word replacement based (XLMRSum-WR) data argumentation method to XLMR (see Section 3.4), we obtain further improvements.", "With MT based argumentation method (XLMRSum-MT), we could re-generate extractive labels using the translated doucments and summaries (the U b set-ting).", "We do observe that the re-generated labels could slightly improve the results, but the resulting XLMRSum-MT is still worse than XLMRSum and XLMRSum-WR.", "With the neural label search method, NLSSum-Sep outperforms all models in comparison.", "For faster feedback, we train a separate model for each language in XLMRSum-MT and XLMRSum-WR and NLSSum-Sep (models for different languages can be trained in parallel), which is to do data argumentation only to one target language.", "In our final model NLSSum, we train one model for all languages (we do data argumentation from English to all target languages) and we observe that the results of NLSSum-Sep and NLSSum are similar.", "Compared with the original XLMRSum, NLSSum achieves 2.27 improvements on the average R-L score, which is a remarkable margin in summarization.", "It indicates that our multilingual neural label search method significantly improves the multilingual zero-shot transferability.", "The differences between NLSSum and other models in comparison except NLSSum-Sep are significant (p < 0.05).", "Specifically, the performance XLMRSum-MT is worse than that of XLMRSum.", "For more in-depth analysis, we note that: 1) As the input of a model, the translation-based documents are prone to the error propagation, therefore, we should avoid Models MLSUM De Es Fr Ru Tr avg XLMRSum 30.35 20.67 22.85 9.39 31.55 22.81 NLSSum w/o T 33.13 21.21 23.09 9.72 32.68 23.97 NLSSum 33.51 21.74 24.10 9.91 32.58 24.37 Train with Different Label Sets XLMRSum-WR w/ U a 32.09 21.04 23.33 9.69 32.04 23.58 XLMRSum-WR w/ U b 30.39 20.71 23.17 9.83 31.37 23.05 XLMRSum-WR w/ U c 29.66 20.64 22.96 9.32 31.63 22.76 XLMRSum-WR w/ U d 30.22 20.16 22.90 9.61 31.90 21.78 Train with All Label Sets and with Fixed Weights XLMRSum-WR, w=0.6 32.12 21.05 23.30 9.31 32.51 23.65 XLMRSum-WR, w=0.7 32.46 20.73 23.67 9.77 32.72 23.82 XLMRSum-WR, w=0.8 32.86 20.98 23.42 9.64 32.93 23.91 XLMRSum-WR, w=0.9 32.41 20.48 23.27 9.57 32.63 23.65 Train with Different Replacement Rates NLSSum w/ 0.45 33.09 21.75 24.13 9.84 32.42 24.25 NLSSum w/ 0.50 33.43 21.78 24.17 9.99 32.31 24.34 NLSSum w/ 0.55 33.51 21.74 24.10 9.91 32.58 24.37 NLSSum w/ 0.60 33.50 21.81 23.98 9.86 32.32 24.29 Table 6: Ablation Study, Zero-Shot ROUGE-L 
Results on Validation Dataset of MLSUM to encode these noise documents.", "2) Fortunately, our multilingual label only applies the translation method when converting document/summary pair into labels, instead of encoding.", "ROUGE Results on WikiLingua To further evaluate the performance of NLSSum, we design additional zero-shot experiments for all our extractive models on WikiLingua.", "These models are trained on English and inferred on other three languages.", "The results are in Table 4.", "We observe that our NLSSum still performs better than all the other extractive models.", "Meanwhile, compared with the results on MLSUM, the improvement on WikiLingua is not remarkable.", "Probably because the documents and summaries in WikiLingua are a series of how-to steps, which are more platitudinous than news summarization.", "To investigate the influence of each components in NLSSum, we conduct experiments on the validation set of MLSUM and the results are in Table 6. In neural label search, we have two weight predictors, the sentence level predictors T and the label set level predictor T (Section 3.6).", "We can see from the first block of Table 6 that without T , the result of NLSSum drops.", "NLSSum leverages four label sets ( U a , U b , U c and U d ) to train T and T .", "In the second block, we study the effect of each label set separately (note that XLMRSum-WR is the backbone of NLSSum and we therefore build label set baselines upon it).", "U a works best overall.", "However, U b is better on Russian compared to 567 b c d 0.7 0.8 0.7 0.7 32.46 32.51 0.8 32.33 32.49 0.8 0.7 32.94 33.03 0.8 32.62 32.86 Table 7: ROUGE-L Results for Different Weights U a , which indicates these different label sets can compensate for each other.", "Not surprisingly, using one label set performs worse than NLSSum.", "In the third block, we use all the label sets, but we use fixed weights instead of using weight predicted from neural label search 4 .", "We can see using multiple label sets can improve variants with only one label set, but there is still a gap to NLSSum, which learns these weights for each sentence automatically.", "It is also possible to use different weights for different label sets.", "To make the number of experiments tractable, we conduct experiments on German only and search weight around our optimal value (i.e., 0.8).", "Results are in Table 7. 
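The fixed-weight rows of Table 6 (and the per-set weights of Table 7) replace the learned sentence-level and set-level weights of Section 3.6 with constants: per footnote 4, the original English set U_a keeps weight 1.0 while U_b, U_c and U_d share one fixed value. The snippet below is a minimal sketch of how such fixed per-set weights could be turned into per-sentence supervision weights; the function name, the simple averaging over the sets that contain a sentence, and the toy values are our own assumptions, loosely mirroring the normalization of Equations (5)-(6) rather than the authors' implementation.

```python
from typing import Dict, List, Set


def fixed_label_weights(
    num_sentences: int,
    label_sets: Dict[str, Set[int]],   # e.g. {"U_a": {0, 3}, "U_b": {3, 5}, ...}
    set_weights: Dict[str, float],     # e.g. {"U_a": 1.0, "U_b": 0.8, "U_c": 0.8, "U_d": 0.8}
) -> List[float]:
    """Per-sentence supervision weights for the fixed-weight ablation:
    a sentence outside every label set gets weight 0; a sentence inside one
    or more sets gets the average weight of the sets that contain it."""
    weights = []
    for i in range(num_sentences):
        containing = [name for name, members in label_sets.items() if i in members]
        if containing:
            weights.append(sum(set_weights[n] for n in containing) / len(containing))
        else:
            weights.append(0.0)
    return weights


# Toy example: sentences 0 and 2 are picked by two sets each, sentence 4 by two
# lower-weighted sets, and all remaining sentences by none.
print(fixed_label_weights(
    6,
    {"U_a": {0, 2}, "U_b": {2, 4}, "U_c": {4}, "U_d": {0}},
    {"U_a": 1.0, "U_b": 0.8, "U_c": 0.8, "U_d": 0.8},
))  # -> [0.9, 0.0, 0.9, 0.0, 0.8, 0.0]
```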
There is slight gain by using different weights, but the result is still worse than NLSSum.", "In the last block, we train NLSSum with different word replacement rates.", "We observe that 55% is the best choice for the bilingual dictionary word replacement and the word replacement rate is not sensitive.", "In practice, we set the rate to 50% directly instead of tuning it, in order to make the our experiments in true zero-shot settings (Perez et al., 2021).", "The human evaluation is important for summarization tasks, since the ROUGE can only determine the textual representation overlapping.", "In this subsection, we design the ranking experiment (Cheng and Lapata, 2016) with system outputs of different systems on the German test set of MLSUM.", "First, we randomly select 20 samples from the test set of German.", "Then, we extract summary sentences from the original document with four mBERTSum, XLMRSum, NLSSum, and Oracle.", "Third, we translate the document and summaries into English by Machine Translation.", "Finally, the human participants are presented with one translated English document and a list of corresponding translated summaries produced by different approaches.", "Each 4 Fixed weight means a fixed weight for label sets U b , U c and U d , instead of the label search in Section 3.6.", "Weight of original English labels U a is set to 1.0, since the second block shows the quality of U a is the highest.", "example is reviewed by five different participants separately.", "Participants are requested to rank these summaries by taking the importance and redundancy into account.", "To measure the quality of MT System, we first translate the English document into German and then back-translate it into English.", "We observed that there are almost no changes in meanings between the original English documents and the back-translated English documents.", "We therefore conclude the German to English translation quality is acceptable.", "As shown in Table 5, NLSSum is ranked 1st 28% of the time and considered best in the extractive models except for Oracle.", "In Figure 3, we calculate the positions of oracle sentence and plot the kernel density 5 .", "Specically, we translate the test set of CNN/DM from English into Turkish and Russian, and re-calculate the oracle labels for each language.", "Then, we collect all of the oracle sentences and keep its relative positions.", "It is obvious that: 1) The oracle sentences of English are mainly located in the head of document, and the Russian takes the second place, and then the Turkish.", "That is why the Turkish achieves more improvement than Russian, by comparing the results of NLSSum and XLMRSum in the in Part III of Table 3.", "2) Multilingual labels pay more attention to the latter sentences, which is more suitable in multilingual summarization.", "We first study the monolingual label bias, that when translate the (document, summary) from English", "into other language, the re-converted labels will change along with the transformation of textual representation.", "Then we propose NLSSum to improve the performance of multilingual zero-shot extractive summarization, by introducing multilingual labels.", "Finally, the summarization model is trained on English with the weighted multilingual labels and achieves great improvement on other languages.", "This work is supported by the Youth Innovation Promotion Association of the Chinese Academy of Sciences (No. 2018192)." ]
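All four label sets used in this paper are produced by the same greedy oracle labelling routine, GetPosLabel (Section 3.1), applied to different document-summary pairs: (D, S), (D_MT, S_MT), (D_MT, S_WR) and (D, S'). For reference, here is a minimal sketch of that routine; it is not the authors' code, it substitutes a crude unigram-F1 overlap for the Python ROUGE implementation actually used, and the cap on the number of selected sentences is an illustrative assumption.

```python
from collections import Counter
from typing import List, Set


def _unigram_f1(candidate: List[str], reference: List[str]) -> float:
    """Crude stand-in for ROUGE: unigram-overlap F1 between two token lists."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / sum(cand.values()), overlap / sum(ref.values())
    return 2 * p * r / (p + r)


def get_pos_label(doc_sentences: List[str], summary: str, max_sentences: int = 3) -> Set[int]:
    """Greedy oracle labelling: repeatedly add the sentence that most improves
    the overlap with the reference summary, skipping sentences that do not help.
    Returns the indices of positively labelled sentences."""
    summary_tokens = summary.split()
    selected: Set[int] = set()
    selected_tokens: List[str] = []
    best_score = 0.0
    while len(selected) < max_sentences:
        best_i, best_gain = None, 0.0
        for i, sent in enumerate(doc_sentences):
            if i in selected:
                continue
            gain = _unigram_f1(selected_tokens + sent.split(), summary_tokens) - best_score
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:  # no remaining sentence improves the score
            break
        selected.add(best_i)
        selected_tokens += doc_sentences[best_i].split()
        best_score += best_gain
    return selected
```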
[ "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "abstain", "method", "abstain", "result", "objective", "result", "objective", "objective", "objective", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "other" ]
[ "Recent studies have revealed that reading comprehension (RC) systems learn to exploit annotation artifacts and other biases in current datasets.", "This prevents the community from reliably measuring the progress of RC systems.", "To address this issue, we introduce R 4 C , a new task for evaluating RC systems' internal reasoning.", "R 4 C requires giving not only answers but also derivations: explanations that justify predicted answers.", "We present a reliable, crowdsourced framework for scalably annotating RC datasets with derivations.", "We create and publicly release the R 4 C dataset, the first, quality-assured dataset consisting of 4.6k questions, each of which is annotated with 3 reference derivations (i.e. 13.8k derivations).", "Experiments show that our automatic evaluation metrics using multiple reference derivations are reliable, and that R 4 C assesses different skills from an existing benchmark.", "Reading comprehension (RC) has become a key benchmark for natural language understanding (NLU) systems, and a large number of datasets are now available (Welbl et al., 2018; Kocisk`y et al., 2018; Yang et al., 2018, i.a.).", "However, it has been established that these datasets suffer from annotation artifacts and other biases, which may allow systems to cheat: Instead of learning to read and comprehend texts in their entirety, systems learn to exploit these biases and find answers via simple heuristics, such as looking for an entity with a particular semantic type (Sugawara et al., 2018; Mudrakarta et al., 2018) (e.g. given a question starting with Who , a system finds a person entity found in a document).", "To address this issue, the community has introduced increasingly more difficult Question Answering (QA) problems, for example, so that answer-Title: Return to Olympus [1] Return to Olympus is the only album by the alternative rock band Malfunkshun.", "related information is scattered across several articles (Welbl et al., 2018; Yang et al., 2018) (i.e. multi-hop QA ).", "However, recent studies show that such multi-hop QA also has weaknesses (Chen and Durrett, 2019; Min et al., 2019; Jiang et al., 2019), e.g. combining multiple sources of information is not always necessary to find answers.", "Another direction, which we follow, includes evaluating a systems' reasoning (Jansen, 2018; Yang et al., 2018; Thorne and Vlachos, 2018; Camburu et al., 2018; Fan et al., 2019; Rajani et al., 2019).", "In the context of RC, Yang et al. (2018) propose HotpotQA, which requires systems not only to give an answer but also to identify supporting facts (SFs), sentences containing information that supports the answer.", "SFs are defined as sentences containing information that supports the answer (see Support-ing facts in Fig. 1 for an example).", "As shown in SFs [1] , [2] , and [7] , however, only a subset of SFs may contribute to the necessary reasoning.", "For example, [1] states two facts:", "(a) Return to Olympus is an album by Malfunkshun ; and", "(b) Malfunkshun is a rock band .", "Among these, only", "(b) is related to the necessary reasoning.", "Thus, achieving a high accuracy in the SF detection task does not fully prove a RC systems's reasoning ability.", "This paper proposes R 4 C , a new task of RC that requires systems to provide an answer and derivation 1 : a minimal explanation that justifies predicted answers in a semi-structured natural language form (see Derivation in Fig. 
1 for an example).", "Our main contributions can be summarized as follows: We propose R 4 C , which enables us to quantitatively evaluate a systems' internal reasoning in a finer-grained manner than the SF detection task.", "We show that R 4 C assesses different skills from the SF detection task.", "We create and publicly release the first dataset of R 4 C consisting of 4,588 questions, each of which is annotated with 3 high-quality derivations (i.e. 13,764 derivations), available at https://naoya-i.github.io/r4c/ .", "We present and publicly release a reliable, crowdsourced framework for scalably annotating existing RC datasets with derivations in order to facilitate large-scale dataset construction of derivations in the RC community.", "We build R 4 C on top of the standard RC task.", "Given a question q and articles R , the task is", "(i) to find the answer a from R and", "(ii) to generate a derivation D that justifies why a is believed to be the answer to q .", "There are several design choices for derivations, including whether derivations should be structured, whether the vocabulary should be closed, etc.", "This 1 R 4 C is short for Right for the Right Reasons RC. leads to a trade-off between the expressivity of reasoning and the interpretability of an evaluation metric.", "To maintain a reasonable trade-off, we choose to represent derivations in a semi-structured natural language form.", "Specifically, a derivation is defined as a set of derivation steps .", "Each derivation step d i D is defined as a relational fact, i.e. d i (cid:104) d hi , d ri , d ti (cid:105) , where d hi , d ti are entities (noun phrases), and d ri is a verb phrase representing a relationship between d ti and d hi (see Fig. 1 for an example), similar to the Open Information Extraction paradigm (Etzioni et al., 2008).", "d hi , d ri , d ti may be a phrase not contained in R (e.g. is lead singer of in Fig. 1).", "While the output derivations are semi-structured, the linguistic diversity of entities and relations still prevents automatic evaluation.", "One typical solution is crowdsourced judgement, but it is costly both in terms of time and budget.", "We thus resort to a reference-based similarity metric.", "Specifically, for output derivation D , we assume n sets of golden derivations G 1 , G 2 , ..., G n .", "For evaluation, we would like to assess how well derivation steps in D can be aligned with those in G i in the best case.", "For each golden derivation G i , we calculate c ( D ; G i ) , an alignment score of D with respect to G i or a soft version of the number of correct derivation steps in D (i.e. 0 c ( D ; G i ) min( | D | , | G i | ) ).", "We then find a golden derivation G that gives the highest c ( D ; G ) and define the precision, recall and f 1 as follows: pr( D ) = c ( D ; G ) | D | , rc( D ) = c ( D ; G ) | G | f 1 ( D ) = 2 pr( D ; G ) rc( D ; G ) pr( D ; G ) + rc( D ; G ) An official evaluation script is available at https: //naoya-i.github.io/r4c/ .", "Alignment score To calculate c ( D ; G i ) , we would like to find the best alignment between derivation steps in D and those in G i .", "See Fig. 
2 for an example, where two possible alignments A 1 , A 2 are shown.", "As derivation steps in D agree with those in G i with A 2 more than those with A 1 , we would like to consider A 2 when evaluating.", "We first define c ( D ; G i , A j ) , the correctness of D given a specific alignment A j , and then pick the [Malfunkshun] is [a rock band] [Andrew Wood] is lead singer of [Malfunkshun] [Andrew Wood] died just before the release of [Apple] [Andrew Wood] is a member of [Mother Love Bone] [Malfunkshun]is former of [Mother Love Bone] [Return to Olympus] is [an album] [Andrew Wood] died before the release of [Apple] [Malfunkshun]is former of [Mother Love Bone] 0.1 1.0 0.8 0.05 0.1 0.2 Output D Golden G i A 2 A 2 A 2 A 1 A 1 A 1 Figure 2: Two possible alignments A 1 and A 2 between D and G i with their alignment scores a ( , ) .", "best alignment as follows: c ( D ; G i , A j ) = (cid:88) ( d j ,g j ) A j a ( d j , g j ) c( D ; G i ) = max A j A ( D,G i ) c ( D ; G i , A j ) , where a ( d j , g j ) is a similarity [0 , 1] between two derivation steps d j , g j , and A ( D, G i ) denotes all possible one-to-one alignments between derivation steps in D and those in G i .", "For a ( d j , g j ) , we consider three variants, depending on the granularity of evaluation.", "We first introduce two fine-grained scorer, taking only entities or relations into account (henceforth, entity scorer and relation scorer ): a ent ( d j , g j ) = 1 2(s( d hj , g hj ) + s( d tj , g tj )) a rel ( d j , g j ) = s( d rj , g rj ) , where s( , ) denotes an arbitrary similarity measure [0 , 1] between two phrases.", "In this study, we employ a normalized Levenshtein distance.", "Finally, as a rough indication of overall performance, we also provide a full scorer as follows: a full ( d j , g j ) = 1 3(s( d hj , g hj )+s( d rj , g rj )+s( d tj , g tj )) 3 Data collection The main purpose of R 4 C is to benchmark an RC systems' internal reasoning.", "We thus assume a semi-supervised learning scenario where RC systems are trained to answer a given question on a Figure 3: Crowdsourcing interface for derivation annotation.", "large-scale RC dataset and then fine-tuned to give a correct reasoning on a smaller reasoning-annotated datasets.", "To acquire a dataset of derivations, we use crowdsourcing (CS).", "We design our interface to annotate existing RC datasets with derivations, as a wide variety of high quality RC datasets are already available (Welbl et al., 2018; Yang et al., 2018, etc.).", "We assume that RC datasets provide", "(i) a question,", "(ii) the answer, and", "(iii) supporting articles , articles that support the answer (optionally with SFs).", "Initially, in order to encourage crowdworkers (henceforth, workers ) to read the supporting articles carefully, we ask workers to answer to the question based on the supporting articles (see Appendix A).", "To reduce the workload, four candidate answers are provided.", "2 We also allow for neither as RC datasets may contain erroneous instances.", "Second, we ask workers to write derivations for their answer (see Fig. 
3).", "They click on a sentence (either a SF or non-SF) in a supporting article (left) and then input their derivation in the form of triplets (right).", "They are asked to input entities and relations through free-form textboxes.", "To reduce the workload and encourage annotation consistency, 2 The correct answer and three incorrect answers randomly chosen from the titles of the supporting articles.", "we also provide suggestions.", "These suggestions include predefined prepositions, noun phrases, and verb phrases automatically extracted from supporting articles.", "3 We also highlight SFs if they are available for the given RC dataset.", "To discourage noisy annotations, we first deploy a qualification test.", "We provide the same task described in 3.1 in the test and manually identify competent workers in our task.", "The final annotation is carried out solely by these qualified workers.", "We deploy the task on Amazon Mechanical Turk (AMT).", "4 We allow workers with 5,000 Human Intelligence Tasks experience and an approval rate of 95.0% to take the qualification test.", "For the test, we pay 15 as a reward per instance.", "For the final annotation task, we assign 3 workers per instance and pay 30 to each worker.", "There are a large number of choices of RC datasets that meet the criteria described in 3.1 including SQuAD (Rajpurkar et al., 2016) and Wiki-Hop (Welbl et al., 2018).", "Our study uses HotpotQA (Yang et al., 2018), one of the most actively used multi-hop QA datasets.", "5 The multi-hop QA setting ensures that derivation steps are spread across documents, thereby posing an interesting unsolved research problem.", "For annotation, we sampled 3,000 instances from 90,564 training instances and 3,000 instances from 7,405 development instances.", "For the qualification test and interface development, we sampled another 300 instances from the training set.", "We used the annotations of SFs provided by HotpotQA.", "We assume that the training set is used for fine-tuning RC systems' internal reasoning, and the development set is used for evaluation.", "In the qualification test, we identified 45 competent workers (out of 256 workers).", "To avoid noisy annotations, we filter out submissions", "(i) with a wrong answer and", "(ii) with a neither answer.", "After the filtering, we retain only instances with exactly three derivations annotated.", "Finally, we obtained 7,137 derivations for 2,379 instances in the training set and 7,623 derivations for 2,541 instances in the dev set.", "See Appendix B for annotation examples.", "To check whether annotated derivations help humans recover answers, we setup another CS task on AMT ( answerability judgement ).", "Given a HotpotQA question and the annotated derivation, 3 workers are asked whether or not they can answer the question solely based on the derivation at three levels.", "We evaluate all 7,623 derivations from the dev set.", "For reliability, we targeted only qualified workers and pay 15 as a reward per instance.", "To see if each derivation step can actually be derived from its source SF, we asked two expert annotators (non co-authors) to check 50 derivation steps from the dev set ( derivability judgement ).", "For the answerability judgement, we obtained Krippendorff's of 0.263 (a fair agreement).", "With majority voting, we obtained the following results: YES : 95.2%, LIKELY : 2.2%, and NO : 1.3% (split: 1.3%).", "6 For the derivability judgement, 96.0% of the sampled derivation steps (48/50) are judged as derivable from their 
corresponding SFs by both expert annotators.", "Despite the complexity of the annotation task, the results indicate that the proposed annotation pipeline can capture competent workers and produce high-quality derivation annotations.", "For the final dev set, we retain only instances with YES answerability judgement.", "The final R 4 C dataset includes 4,588 questions from HotpotQA (see Table 1), each of which is annotated with 3 reference derivations (i.e. 13,764 derivations).", "This is the first dataset of RC annotated with semi-structured, multiple reference derivations.", "The most closest work to our dataset is the WorldTree corpus (Jansen et al., 2018), the largest QA dataset annotated with explanations, 6 We also evaluated 1,000 training instances: 96.0% with YES judgement with Krippendorff's of 0.173.", "which contains 1,680 questions.", "Jansen et al. (2018) use experts for annotation, and the annotated explanations are grounded on a predefined, structured knowledge base.", "In contrast, our work proposes a non-expert-based annotation framework and grounds explanations using unstructured texts.", "Effect of multiple references Do crowdsourced multiple golden derivations help us to evaluate output derivations more accurately?", "To verify this, we evaluated oracle derivations using one, two, or all three references.", "The derivations were written by qualified workers for 100 dev instances.", "Table 2 shows that having more references increases the performance, which indicates that references provided by different workers are indeed diverse enough to capture oracle derivations.", "The peak performance with # rf= 3 establishes the upper bound performance on this dataset.", "The larger improvement of the relation-level performance (+14.5) compared to that of the entity-level performance (+8.0) also suggests that relations are linguistically more diverse than entities, as we expected (e.g. is in , is a town in , and is located in are annotated for a locational relation).", "Baseline models To analyze the nature of R 4 C , we evaluate the following heuristic models.", "IE : extracting all entity relations from SFs.", "7 CORE : extracting the core information of SFs.", "Based on the dependency structure of SFs (with article title t ), it extracts a root verb v and the right, first child c r of v , and outputs (cid:104) t , v , c r (cid:105) as a derivation step.", "Table 3 shows a large performance gap to the human upper bound, indicating that R 4 C is different to the HotpotQA's SF detection taskit does not simply require systems to exhaustively extract information nor to extract core information from SFs.", "The errors from these baseline models include generating entity relations irrelevant to reasoning (e.g. Return to Olympus is an album in Fig. 2) or missing implicit entity relations (e.g. Andrew Wood is 7 We use Stanford OpenIE (Angeli et al., 2015).", "a member of Mother Love Bone in Fig. 
1).", "R 4 C introduces a new research problem for developing RC systems that can explain their answers.", "Towards evaluating RC systems' internal reasoning, we have proposed R 4 C that requires systems not only to output answers but also to give their derivations.", "For scalability, we have carefully developed a crowdsourced framework for annotating existing RC datasets with derivations.", "Our experiments have demonstrated that our framework produces high-quality derivations, and that automatic evaluation metrics using multiple reference derivations can reliably capture oracle derivations.", "The experiments using two simple baseline models highlight the nature of R 4 C , namely that the derivation generation task is not simply the SF detection task.", "We make the dataset, automatic evaluation script, and baseline systems publicly available at https://naoya-i.github.io/r4c/ .", "One immediate future work is to evaluate state-of-the-art RC systems' internal reasoning on our dataset.", "For modeling, we plan to explore recent advances in conditional language models for jointly modeling QA with generating their derivations.", "This work was supported by the UCL-Tohoku University Strategic Partnership Fund, JSPS KAK-ENHI Grant Number 19K20332, JST CREST Grant Number JPMJCR1513 (including the AIP challenge program), the European Union's Horizon 2020 research and innovation programme under grant agreement No 875160, and the UK Defence Science and Technology Laboratory (Dstl) and Engineering and Physical Research Council (EPSRC) under grant EP/R018693/1 (a part of the collaboration between US DOD, UK MOD, and UK EPSRC under the Multidisciplinary University Research Initiative (MURI)).", "The authors would like to thank Paul Reisert, Keshav Singh, other members of the Tohoku NLP Lab, and the anonymous reviewers for their insightful feedback." ]
[ "abstain", "abstain", "objective", "abstain", "method", "objective", "result", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "other", "other", "method", "other", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "objective", "abstain", "other", "method", "objective", "other", "other" ]
[ "TACRED (Zhang et al., 2017) is one of the largest, most widely used crowdsourced datasets in Relation Extraction (RE).", "But, even with recent advances in unsupervised pretraining and knowledge enhanced neural RE, models still show a high error rate.", "In this paper, we investigate the questions: Have we reached a performance ceiling or is there still room for improvement?", "And how do crowd annotations, dataset, and models contribute to this error rate?", "To answer these questions, we first validate the most challenging 5K examples in the development and test sets using trained annotators.", "We find that label errors account for 8% absolute F1 test error, and that more than 50% of the examples need to be relabeled.", "On the relabeled test set the average F1 score of a large baseline model set improves from 62.1 to 70.1.", "After validation, we analyze misclassifications on the challenging instances, categorize them into linguistically motivated error groups, and verify the resulting error hypotheses on three state-of-the-art RE models.", "We show that two groups of ambiguous relations are responsible for most of the remaining errors and that models may adopt shallow heuristics on the dataset when entities are not masked.", "Relation Extraction (RE) is the task of extracting relationships between concepts and entities from text, where relations correspond to semantic categories such as per:spouse , org:founded by or org:subsidiaries (Figure 1).", "This makes RE a key part of many information extraction systems, and its performance determines the quality of extracted facts for knowledge base population (Ji and Grishman, 2011), or the quality of answers in question answering systems (Xu et al., 2016).", "Standard benchmarks such as SemEval 2010 Task 8 (Hen-drickx et al., 2010) and the more recent TACRED [...] included Aerolineas's domestic subsidiary, Austral.", "(Zhang et al., 2017) are essential to evaluate new RE methods and their limitations, and to establish open challenges.", "TACRED is one of the largest and most widely used RE datasets.", "It contains more than 106k examples annotated by crowd workers.", "The methods best performing on the dataset use some form of pre-training to improve RE performance: fine-tuning pre-trained language representations (Alt et al., 2019; Shi and Lin, 2019; Joshi et al., 2019) or integrating external knowledge during pre-training, e.g. 
via joint language modelling and linking on entity-linked text (Zhang et al., 2019; Peters et al., 2019; Baldini Soares et al., 2019); with the last two methods achieving a state-of-the-art performance of 71.5 F1.", "While this performance is impressive, the error rate of almost 30% is still high.", "The question we ask in this work is: Is there still room for improvement, and can we identify the underlying factors that contribute to this error rate?", "We analyse this question from two separate viewpoints: (1) to what extent does the quality of crowd based annotations contribute to the error rate, and (2) what can be attributed to dataset and models?", "Answers to these questions can provide insights for improving crowdsourced annotation in RE, and suggest directions for future research.", "To answer the first question, we propose the following approach: We first rank examples in the development and test sets according to the misclassifications of 49 RE models and select the top 5k instances for evaluation by our linguists.", "This procedure limits the manual effort to only the most challenging examples.", "We find that a large fraction of the examples are mislabeled by the crowd.", "Our first contribution is therefore a extensively relabeled TACRED development and test set.", "To answer the second question, we carry out two analyses: (1) we conduct a manual explorative analysis of model misclassifications on the most challenging test instances and categorize them into several linguistically motivated error categories; (2) we formulate these categories into testable hypotheses, which we can automatically validate on the full test set by adversarial rewriting removing the suspected cause of error and observing the change in model prediction (Wu et al., 2019).", "We find that two groups of ambiguous relations are responsible for most of the remaining errors.", "The dataset also contains clues that are exploited by models without entity masking, e.g. to correctly classify relations even with limited access to the sentential context.", "We limit our analysis to TACRED, but want to point out that our approach is applicable to other RE datasets as well.", "We make the code of our analyses publicly available.", "1 In summary, our main contributions in this paper are: We validate the 5k most challenging examples in the TACRED development and test sets, and provide a revised dataset 2 that will improve the accuracy and reliability of future RE method evaluations.", "We evaluate the most challenging, incorrectly predicted examples of the revised test set, and develop a set of 9 categories for common RE errors, that will also aid evaluation on other datasets.", "We verify our error hypotheses on three state-of-the-art RE models and show that two groups of ambiguous relations are responsible for most of the remaining errors and that models exploit cues in the dataset when entities are unmasked.", "The TAC R elation E xtraction D ataset 3 , introduced by Zhang et al. (2017), is a fully supervised dataset of sentence-level binary relation mentions.", "It consists of 106k sentences with entity mention pairs collected from the TAC KBP 4 evaluations 2009 2014, with the years 2009 to 2012 used for training, 2013 for development, and 2014 for testing.", "Each sentence is labeled with one of 41 personand organization-oriented relation types, e.g. 
per:title , org:founded , or the label no relation for negative instances.", "Table 1 summarizes key statistics of the dataset.", "All relation labels were obtained by crowdsourc-ing, using Amazon Mechanical Turk.", "Crowd workers were shown the example text, with head (sub-ject) and tail (object) mentions highlighted, and asked to select among a set of relation label suggestions, or to assign no relation .", "Label suggestions were limited to relations compatible with the head and tail types.", "5 The data quality is estimated as relatively high by Zhang et al. (2017), based on a manual verifi-cation of 300 randomly sampled examples (93.3% validated as correct).", "The inter-annotator kappa label agreement of crowd workers was moderate at = 0 .", "54 for 761 randomly selected mention pairs.", "In order to identify the impact of potentially noisy, crowd-generated labels on the observed model performance, we start with an analysis of TACRED's label quality.", "We hypothesize that while comparatively untrained crowd workers may on average produce relatively good labels for easy relation mentions, e.g. those with obvious syntactic and/or 3 https://catalog.ldc.upenn.edu/ LDC2018T24 4 https://tac.nist.gov/2017/KBP/index.", "html 5 See the supplemental material provided by Zhang et al. (2017) for details of the dataset creation and annotation process.", "lexical triggers, or unambiguous entity type signatures such as per:title , they may frequently err on challenging examples, e.g. highly ambiguous ones or relation types whose scope is not clearly defined.", "An analysis of the complete dataset using trained annotators would be prohibitively expensive.", "We therefore utilize a principled approach to selecting examples for manual analysis (Section 3.1).", "Based on the TAC-KBP annotation guidelines, we then validate these examples (Section 3.2), creating new Dev and Test splits where incorrect annotations made by crowd workers are revised (Section 3.3).", "Since we are interested in identifying potentially incorrectly labeled examples, we implement a selection strategy which is based upon ordering examples by the difficulty of predicting them correctly.", "6 We use a set of 49 different RE models to obtain predictions on the development and test sets, and rank each example according to the number of models predicting a different relation label than the ground truth.", "7 Intuitively, examples with large disagreement, between all models or between models and the ground truth, are either difficult, or incorrectly annotated.", "We select the following examples for validation:", "(a) Challenging all examples that were misclassified by at least half of the models, and", "(b) Control a control group of (up to) 20 random examples per relation type, including no relation , from the set of examples classified correctly by at least 39 models.", "The two groups cover both presumably hard and easy examples, and allow us to contrast validation results based on example difficulty.", "In total we selected 2,350 (15.2%) Test examples and 3,655 (16.2%) Dev examples for validation.", "Of these, 1,740 ( Test ) and 2,534 ( Dev ) were assigned a positive label by crowd workers.", "We validate the selected examples on the basis of the TAC KBP guidelines.", "8 We follow the approach of Zhang et al. (2017), and present each example by showing the example's text with highlighted head and tail spans, and a set of relation label suggestions.", "We differ from their setup by showing 6 A similar approach was used e.g. 
by Barnes et al. (2019).", "8 https://tac.nist.gov/2014/KBP/ ColdStart/guidelines/TAC_KBP_2014_Slot_Descriptions_V1.4.pdf more label suggestions to make the label choice less restrictive:", "(a) the original, crowd-generated ground truth label,", "(b) the set of labels predicted by the models,", "(c) any other relation labels matching the head and tail entity types, and", "(d) no relation .", "The suggested positive labels are presented in an alphabetical order and are followed by no relation , with no indication of a label's origin.", "Annotators are asked to assign no relation or up to two positive labels from this set.", "A second label was allowed only if the sentence expressed two relations, according to the guidelines, e.g. per:city of birth and per:city of residence .", "Any disagreements are subsequently resolved by a third annotator, who is also allowed to consider the original ground truth label.", "All annotators are educated in general linguistics, have extensive prior experience in annotating data for information extraction tasks, and are trained in applying the task guidelines in a trial annotation of 500 sentences selected from the development set.", "Table 2 shows the results of the validation process.", "In total, the annotators revised 960 (49.9%) of the Challenging Test examples, and 1,610 (52.1%) of the Challenging Dev examples, a very large fraction of label changes for both dataset splits.", "Revision rates for originally positive examples are lower at 47.3% ( Test ) and 49.1% ( Dev ).", "Approximately 57% of the negative examples were relabeled with a positive relation label (not shown).", "Two labels were assigned to only 3.1% of the Test , and 2.4% of the Dev examples.", "The multi-labeling mostly occurs with location relations, e.g. the phrase [Gross] head : per , a 60-year-old native of [Potomac] tail : city is labeled with per:city of birth and per:city of residence , which is justified by the meaning of the word native .", "As expected, the revision rate in the Control groups is much lower, at 8.9% for Test and 8.1% for Dev .", "We can also see that the fraction of negative examples is approximately one-third in the Challenging group, much lower than the dataset average of 79.5%.", "This suggests that models have more difficulty predicting positive examples correctly.", "The validation inter-annotator agreement is shown in Table 3.", "It is very high at Test = 0 .", "87 and Dev = 0 .", "80 , indicating a high annotation quality.", "For both Test and Dev , it is higher for the easier Control groups than for the Challenging Dev Test Challenging Control Challenging Control # Examples (# positive) 3,088 (1,987) 567 (547) 1,923 (1,333) 427 (407) # Revised (# positive) 1,610 (976) 46 (46) 960 (630) 38 (38) # Revised (% positive) 52.1 ( 49.1 ) 8.1 ( 8.4 ) 49.9 ( 47.3 ) 8.9 ( 9.3 ) Table 2: Re-annotation statistics for TACRED Dev and Test splits.", "groups.", "In contrast, the average agreement between our annotators and the crowdsourced labels is much lower at Test = 0 .", "55 , Dev = 0 .", "53 , and lowest for Challenging examples (e.g., Test = 0 . 44 ).", "Frequently erroneous crowd labels are per:cities of residence , org:alternate names , and per:other family .", "Typical errors include mislabeling an example as positive which does not express the relation, e.g. labeling [Alan Gross] head : per was arrested at the [Havana] tail : loc airport. as per:cities of residence , or not assigning a positive relation label, e.g. 
per:other family in [Benjamin Chertoff] head : per is the Editor in Chief of Popular Mechanics magazine, as well as the cousin of the Director of Homeland Security, [Michael Chertoff] tail : per .", "Approximately 49% of the time an example's label was changed to no relation during validation, 36% of the time from no relation to a positive label, and the remaining 15% it was changed to or extended with a different relation type.", "To measure the impact of dataset quality on the performance of models, we evaluated all 49 models on the revised test split.", "The average model F1 score rises to 70.1%, a major improvement of 8% over the 62.1% average F1 on the original test split, corresponding to a 21.1% error reduction.", "Discussion The large number of label corrections and the improved average model performance show that the quality of crowdsourced annotations is a major factor contributing to the overall error rate of models on TACRED.", "Even though our selection strategy was biased towards examples challenging for models, the large proportion of changed labels suggests that these examples were difficult to label for crowd workers as well.", "To put this number into perspective Riedel et al. (2010) showed that, for a distantly supervised dataset, about 31% of the sentence-level labels were wrong, which is less than what we observe here for human-supervised data.", "9 The low quality of crowd-generated labels in the Challenging group may be due to their complexity, or due to other reasons, such as lack of detailed annotation guidelines, lack of training, etc.", "It suggests that, at least for Dev and Test splits, crowdsourc-ing, even with crowd worker quality checks as used by Zhang et al. (2017), may not be sufficient to produce high quality evaluation data.", "While models may be able to adequately utilize noisily labeled data for training, measuring model performance and comparing progress in the field may require an investment in carefully labeled evaluation datasets.", "This may mean, for example, that we need to employ well-trained annotators for labeling evaluation splits, or that we need to design better task definitions and task presentations setups as well as develop new quality control methods when using crowd-sourced annotations for complex NLP tasks like RE.", "After revising the dataset, we focus on the two open questions: which of the remaining errors can be attributed to the models, and what are potential reasons for misclassifications?", "To answer these, we first create an annotation task instructing the 9 Riedel et", "al.'s estimate is an average over three relations with 100 randomly sampled examples each, for similar news text.", "Two of the relations they evaluated, nationality and place of birth , are also contained in TACRED, the third is contains (location).", "linguists to annotate model misclassifications with their potential causes (Section 4.1).", "We then categorize and analyze the causes and formulate testable hypotheses that can be automatically verified (Sec-tion 4.2).", "For the automatic analysis, we implemented a baseline and three state-of-the-art models (Section 4.3).", "The goal of the annotation is to identify possible linguistic aspects that cause incorrect model predictions.", "We first conduct a manual exploratory analysis on the revised Control and Challenging test instances that are misclassified by the majority of the 49 models.", "Starting from single observations, we iteratively develop a system of categories based on the existence, or 
absence, of contextual and entity-specific features that might mislead the models (e.g. entity type errors or distracting phrases).", "Following the exploration, we define a final set of categories, develop guidelines for each, and instruct two annotators to assign an error category to each misclassified instance in the revised test subset.", "In cases where multiple categories are applicable the annotator selected the most relevant one.", "As in the validation step, any disagreements between the two annotators are resolved by a third expert.", "In a next step, we extend the misclassification categories to testable hypotheses, or groups, that are verifiable on the whole dataset split.", "For example, if we suspect a model to be distracted by an entity in context of same type as one of the relation arguments, we formulate a group has distractor .", "The group contains all instances, both correct and incorrect, that satisfy a certain condition, e.g. there exists at least one entity in the sentential context of same type as one of the arguments.", "The grouping ensures that we do not mistakenly prioritize groups that are actually well-handled on average.", "We follow the approach proposed by Wu et al. (2019), and extend their Errudite framework 10 to the relation extraction task.", "After formulating a hypothesis, we assess the error prevalence over the entire dataset split to validate whether the hypothesis holds, i.e. the group of instances shows an above average error rate.", "In a last step, we test the error hypothesis explicitly by adversarial rewriting of a group's ex-10 https://github.com/uwdata/errudite amples, e.g. by replacing the distracting entities and observing the models' predictions on the rewritten examples.", "In our example, if the has distractor hypothesis is correct, removing the entities in context should change the prediction of previously incorrect examples.", "We evaluate our error hypotheses on a baseline and three of the most recent state-of-the-art RE models.", "None of the models were part of the set of models used for selecting challenging instances (Section 3.1), so as not to bias the automatic evaluation.", "As the baseline we use a single layer CNN (Zeng et al., 2014; Nguyen and Grish-man, 2015) with max-pooling and 300-dimensional GloVe (Pennington et al., 2014) embeddings as input.", "The state-of-the-art models use pre-trained language models (LM) fine-tuned to the RE task and include: TRE (Alt et al., 2019), which uses the unidirectional OpenAI Generative Pre-Trained Transformer (GPT) (Radford et al., 2018); SpanBERT (Joshi et al., 2019), which employs a bidirectional LM similar to BERT (Devlin et al., 2019) but is pre-trained on span-level; and KnowBERT (Peters et al., 2019), which is an extension to BERT that integrates external knowledge.", "In particular, we use KnowBERT-W+W, which is trained by joint entity linking and language modelling on Wikipedia and WordNet.", "In this section, we present our analysis results, providing an answer to the question: which of the remaining errors can be attributed to the models, and what are the potential reasons for these errors?", "We first discuss the findings of our manual misclassification analysis (Section 5.1), followed by the results of the automatic analysis (Section 5.2).", "Table 4 summarizes the linguistic misclassification categories we developed.", "We distinguish between errors resulting from (1) relation argument errors, and (2) context misinterpretation.", "11 The category relation argument errors 
refers to misclassifications resulting from incorrectly assigned 11 The manual analysis focused on the sentence semantics, and left aspects such such as sentence length, distance between entities, etc. for the automatic analysis, which can handle the analysis of surface features more effectively.", "entity spans or entity types of arguments.", "We always labeled type annotation errors, but tolerated minor span annotation errors if they did not change the interpretation of the relation or the entity.", "The category context misinterpretation refers to cases where the sentential context of the arguments is misinterpreted by the model.", "We identify the following context problems: (1) Inverted arguments : the prediction is inverse to the correct relation, i.e. the model's prediction would be correct if head and tail were swapped.", "(2) Wrong arguments : the model incorrectly predicts a relation that holds between head or tail and an un-annotated entity mention in the context, therefore misinterpreting one annotated argument.", "(3) Linguistic distractor : the example contains words or phrases related to the predicted relation, however they do not connect to any of the arguments in a way justifying the prediction.", "(4) Factuality : the model ignores negation, speculation, future tense markers, etc. (5) Context ignored : the example does not contain sufficient linguistic evidence for the predicted relation except for the matching entity types.", "(6) Relation definition : the predicted relation could be inferred from the context using common sense or world knowledge, however the inference is prohibited by the guidelines (e.g. the spokesperson of an organization is not a top member/employee, or a work location is not a pointer to the employee's resi-dence).", "(7) No Relation : the model incorrectly predicts no relation even though there is sufficient linguistic evidence for the relation in the sentential context.", "Discussion The relation label predicted most frequently across the 49 models disagreed with the ground truth label of the re-annotated Challenging and Control Test groups in 1017 (43.3%) of the cases.", "The inter-annotator agreement of error categories assigned to these examples is high at Test = 0 .", "83 ( Test = 0 . 67 if the category No Relation is excluded).", "Argument errors accounted for only 43 (4.2%) misclassifications, since the entities seem to be mostly correctly assigned in the dataset.", "In all entity type misclassification cases except one, the errors originate from false annotations in the dataset itself.", "Context misinterpretation caused 974 (95.8%) false predictions.", "No relation is incorrectly assigned in 646 (63.6%) of misclassified instances, even though the correct relation is often explicitly and unambiguously stated.", "In 134 (13.2%) of the erroneous instances the misclassification resulted from inverted or wrong argument assignment, i.e. the predicted relation is stated, however the arguments are inverted or the predicted relation involves an entity other than the annotated one.", "In 96 (9.4%) instances the error results from TAC KBP guidelines prohibiting specific inferences, affecting most often the classification of the relations per:cities of residence and Figure 2: Error rates for different groups (example subsets) on the revised TACRED test set, for four different models.", "org:top member/employee .", "Furthermore, in 52 (5.1%) of the false predictions models seem to ignore the sentential context of the arguments, i.e. 
the predictions are inferred mainly from the entity types.", "Sentences containing linguistic distractors accounted for 35 (3.4%) incorrect predictions.", "Factuality recognition causes only 11 errors (1.1%).", "However, we assume that this latter low error rate is due to TACRED data containing an insufficient number of sentences suitable for extensively testing a model's ability to consider the missing factuality of relations.", "Surface structure Groups for argument distance ( argdist=1, argdist > 10 ) and sentence length ( sentlen > 30 ) Arguments Head and tail mention NER type ( same nertag, per:*, org:*, per:loc ), and pronominal head/tail ( has coref ) Context Existence of distracting entities ( has distractor ) Ground Truth Groups conditioned on the ground truth ( positive, negative, same nertag&positive )", "Figure 2 shows the error rates of different groups on the revised TACRED test set.", "The plot shows error rates across four representative models.", "Each chart displays groups on the y-axis, and the fraction and number of correct (blue) vs. incorrect (orange) instances in the respective group on the x-axis.", "The average error rate of each model on the full test set is shown for reference in the top-most column titled all .", "Groups with higher than average error rate may indicate characteristics of examples that make classification difficult.", "On the other hand, groups with lower than average error rate comprise examples that the given model performs especially well on.", "What is the error rate for different groups?", "In Figure 2, we can see that KnowBERT has the lowest error rate on the full test set (7.9%), and the masked CNN model the highest (11.9%).", "SpanBERT's and TRE's error rates are in between the two.", "Overall, all models exhibit a similar pattern of error rates across the groups, with KnowBERT performing best across the board, and the CNN model worst.", "We can see that model error rates e.g. for the groups has distractor , argdist > 10 , and has coref do not diverge much from the corresponding overall model error rate.", "The presence of distracting entities in the context therefore does not seem to be detrimental to model performance.", "Similarly, examples with a large distance between the relation arguments, or examples where co-referential information is required, are generally predicted correctly.", "On the other hand, we can see that all models have above-average error rates for the group positive , its subgroup same nertag&positive , and the group per:loc .", "The above-average error rate for positive may be explained by the fact that the dataset contains much fewer positive than negative training instances, and is hence biased towards predicting no relation .", "A detailed analysis shows that the groups per:loc and same nertag&positive are the most ambiguous.", "per:loc contains relations such as per:cities of residence , per:countries of residence and per:origin , that may be expressed in a similar context but differ only in the fine-grained type of the tail argument (e.g. per:city vs. per:country).", "In contrast, same nertag contains all person-person relations such as per:parents , per:children and per:other family , as well as e.g. org:parent and org:subsidiaries that involve the same argument types (per:per vs. 
org:org) and may be only distinguishable from context.", "How important is context?", "KnowBERT and SpanBERT show about the same error rate on the groups per:loc and same nertag&positive .", "They differ, however, in which examples they predict correctly: For per:loc , 78.6% are predicted by both models, and 21.4% are predicted by only one of the models.", "For same nertag&positive , 12.8% of the examples are predicted by only of the models.", "The two models thus seem to identify complementary information.", "One difference between the models is that KnowBERT has access to entity information, while SpanBERT masks entity spans.", "To test how much the two models balance context and argument information, we apply rewriting to alter the instances belonging to a group and observe the impact on performance.", "We use two strategies: (1) we remove all tokens outside the span between head and tail argument ( outside ), and (2) we remove all tokens between the two arguments ( between ).", "We find that SpanBERT's performance on per:loc drops from 62.1 F1 to 57.7 (outside) and 43.3 (between), whereas Know-BERT's score decreases from 63.7 F1 to 60.9 and 50.1, respectively.", "On same nertag&positive , we observe a drop from 89.2 F1 to 58.2 (out-side) and 47.7 (between) for SpanBERT.", "KnowBERT achieves a score of 89.4, which drops to 83.8 and 49.0.", "The larger drop in performance on same nertag&positive suggests that SpanBERT, which uses entity masking, focuses more on the context, whereas KnowBERT focuses on the entity content because the model has access to the arguments.", "Surprisingly, both models show similar Original Revised Weighted Model CNN, masked 59.5 66.5 34.8 TRE 67.4 75.3 48.8 SpanBERT 70.8 78.0 61.9 KnowBERT 71.5 79.3 58.7 Table 5: Test set F1 score on TACRED, our revised version, and weighted by difficulty (on revised).", "Should instance difficulty be considered?", "Another question is whether the dataset contains instances that can be solved more easily than others, e.g. 
those with simple patterns or patterns frequently observed during training.", "We assume that these examples are also more likely to be correctly classified by our baseline set of 49 RE models.", "To test this hypothesis, we change the evaluation setup and assign a weight to each instance based on the number of correct predictions.", "An example that is correctly classified by all 49 baseline models would receive a weight of zero and thus effectively be ignored whereas an instance misclassified by all models receives a weight of one.", "In Table 5, we can see that SpanBERT has the highest score on the weighted test set (61.9 F1), a 16% decrease compared to the unweighted revised test set.", "KnowBERT has the second highest score of 58.7, 3% less than SpanBERT.", "The performance of TRE and CNN is much worse at 48.8 and 34.8 F1, respectively.", "The result suggests that SpanBERT's span-level pre-training and entity masking are ben-eficial for RE and allow the model to generalize better to challenging examples.", "Given this observation, we propose to consider an instance's difficulty during evaluation.", "Relation Extraction on TACRED Recent RE approaches include PA-LSTM (Zhang et al., 2017) and GCN (Zhang et al., 2018), with the former combining recurrence and attention, and the latter leveraging graph convolutional neural networks.", "Many current approaches use unsupervised or semi-supervised pre-training: fine-tuning of language representations pre-trained on token-level (Alt et al., 2019; Shi and Lin, 2019) or span-level (Joshi et al., 2019), fine-tuning of knowledge enhanced word representations that are pre-trained on entity-linked text (Zhang et al., 2019; Peters et al., 2019), and matching the blanks pre-training (Bal-dini Soares et al., 2019).", "Dataset Evaluation Chen et al. (2016) and Barnes et al. (2019) also use model results to assess dataset difficulty for reading comprehension and sentiment analysis.", "Other work also explores bias in datasets and the adoption of shallow heuristics on biased datasets in natural language inference (Niven and Kao, 2019) and argument reasoning comprehension (McCoy et al., 2019).", "Analyzing trained Models Explanation methods include occlusion or gradient-based methods, measuring the relevance of input features to the output (Zintgraf et al., 2017; Harbecke et al., 2018), and probing tasks (Conneau et al., 2018; Kim et al., 2019) that probe the presence of specific features e.g. in intermediate layers.", "More similar to our approach is rewriting of instances (Jia and Liang, 2017; Ribeiro et al., 2018) but instead of evaluating model robustness we use rewriting to test explicit error hypotheses, similar to Wu et al. 
(2019).", "In this paper, we conducted a thorough evaluation of the TACRED RE task.", "We validated the 5k most challenging examples in development and test set and showed that labeling is a major error source, accounting for 8% absolute F1 error on the test set.", "This clearly highlights the need for careful evaluation of development and test splits when creating datasets via crowdsourcing.", "To improve the evaluation accuracy and reliability of future RE methods, we provide a revised, extensively relabeled TACRED.", "In addition, we categorized model misclassifications into 9 common RE error categories and observed that models are often unable to predict a relation, even if it is expressed explicitly.", "Models also frequently do not recognize argument roles correctly, or ignore the sentential context.", "In an automated evaluation we verified our error hypotheses on the whole test split and showed that two groups of ambiguous relations are responsible for most of the remaining errors.", "We also showed that models adopt heuristics when entities are unmasked and proposed that evaluation metrics should consider an instance's difficulty.", "We would like to thank all reviewers for their thoughtful comments and encouraging feedback, and Matthew Peters for providing KnowBERT predictions on TACRED.", "We would also like to thank Elif Kara, Ulli Strohriegel and Tatjana Zeen for the annotation of the dataset.", "This work has been supported by the German Federal Ministry of Education and Research as part of the projects DEEPLEE (01IW17001) and BBDC2 (01IS18025E), and by the German Federal Ministry for Economic Affairs and Energy as part of the project PLASS (01MD19003E)." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "objective", "abstain", "objective", "abstain", "result", "objective", "other", "result", "abstain", "method", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "method", "result", "abstain", "result", "objective", "abstain", "result", "objective", "other", "other", "other" ]
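The difficulty-weighted evaluation described above (weighting each test instance by how many of the 49 baseline models misclassify it, so that an instance solved by all models counts for nothing and one missed by all counts fully) can be sketched as follows. This is a minimal illustration, not the authors' code: the linear weighting, the micro-averaged F1 restricted to positive relations, and the label name no_relation are assumptions made for the example.

```python
def difficulty_weights(baseline_predictions, gold):
    """baseline_predictions: one prediction list per baseline model; gold: gold labels.
    Weight of an instance = fraction of baseline models that misclassify it."""
    n_models = len(baseline_predictions)
    return [
        sum(1 for preds in baseline_predictions if preds[i] != g) / n_models
        for i, g in enumerate(gold)
    ]

def weighted_micro_f1(pred, gold, weights, negative_label="no_relation"):
    """Micro precision/recall/F1 over positive relations, with each instance
    contributing its difficulty weight instead of a count of 1."""
    tp = fp = fn = 0.0
    for p, g, w in zip(pred, gold, weights):
        if p == negative_label and g == negative_label:
            continue                      # true negatives are ignored, as in standard RE scoring
        if p == g:
            tp += w
        else:
            if p != negative_label:
                fp += w                   # predicted a (wrong) positive relation
            if g != negative_label:
                fn += w                   # missed a gold positive relation
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Under this weighting, examples that every baseline model already gets right are effectively removed from the test set, which is why the weighted scores in Table 5 are so much lower than the unweighted ones.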
[ "Despite the widespread success of self-supervised learning via masked language models (MLM), accurately capturing fine-grained semantic relationships in the biomedical domain remains a challenge.", "This is of paramount importance for entity-level tasks such as entity linking where the ability to model entity relations (especially synonymy) is pivotal.", "To address this challenge, we propose SAPBERT , a pretraining scheme that self-aligns the representation space of biomedical entities.", "We design a scalable metric learning framework that can leverage UMLS, a massive collection of biomedical ontologies with 4M+ concepts.", "In contrast with previous pipeline-based hybrid systems, SAPBERT offers an elegant one-model-for-all solution to the problem of medical entity linking (MEL), achieving a new state-of-the-art (SOTA) on six MEL benchmarking datasets.", "In the scientific domain, we achieve SOTA even without task-specific supervision.", "With substantial improvement over various domain-specific pretrained MLMs such as BIOBERT , SCIBERT and PUBMEDBERT , our pretraining scheme proves to be both effective and robust.", "1 1 Introduction Biomedical entity 2 representation is the foundation for a plethora of text mining systems in the medical domain, facilitating applications such as literature search (Lee et al., 2016), clinical decision making (Roberts et al., 2015) and relational knowledge discovery (e.g. chemical-disease, drug-drug and protein-protein relations, Wang et al. 2018).", "The heterogeneous naming of biomedical concepts Work conducted prior to joining Amazon.", "1 For code and pretrained models, please visit: https: //github.com/cambridgeltl/sapbert .", "2 In this work, biomedical entity refers to the surface forms of biomedical concepts, which can be a single word (e.g. fever ), a compound (e.g. sars-cov-2 ) or a short phrase (e.g. abnormal retinal vascular development ).", "poses a major challenge to representation learning.", "For instance, the medication Hydroxychloroquine is often referred to as Oxichlorochine (alternative name), HCQ (in social media) and Plaquenil (brand name).", "MEL addresses this problem by framing it as a task of mapping entity mentions to unified concepts in a medical knowledge graph.", "3 The main bottleneck of MEL is the quality of the entity representations (Basaldella et al., 2020).", "Prior works in this domain have adopted very sophisticated text pre-processing heuristics (D'Souza and Ng, 2015; Kim et al., 2019; Ji et al., 2020; Sung et al., 2020) which can hardly cover all the variations of biomedical names.", "In parallel, self-supervised learning has shown tremendous success in NLP via leveraging the masked language modelling (MLM) 3 Note that we consider only the biomedical entities themselves and not their contexts, also known as medical concept normalisation/disambiguation in the BioNLP community.", "objective to learn semantics from distributional representations (Devlin et al., 2019; Liu et al., 2019).", "Domain-specific pretraining on biomedical corpora (e.g. BIOBERT , Lee et al. 2020 and BIOMEGATRON , Shin et al. 2020) have made much progress in biomedical text mining tasks.", "Nonetheless, representing medical entities with the existing SOTA pretrained MLMs (e.g. PUBMEDBERT , Gu et al. 2020) as suggested in Fig. 
1 (left) does not lead to a well-separated representation space.", "To address the aforementioned issue, we propose to pretrain a Transformer-based language model on the biomedical knowledge graph of UMLS (Boden-reider, 2004), the largest interlingua of biomedical ontologies.", "UMLS contains a comprehensive collection of biomedical synonyms in various forms (UMLS 2020AA has 4M+ concepts and 10M+ synonyms which stem from over 150 controlled vocabularies including MeSH, SNOMED CT, RxNorm, Gene Ontology and OMIM).", "4 We design a self-alignment objective that clusters synonyms of the same concept.", "To cope with the immense size of UMLS, we sample hard training pairs from the knowledge base and use a scalable metric learning loss.", "We name our model as S elfa ligning p retrained BERT (SAPBERT ).", "Being both simple and powerful, SAPBERT obtains new SOTA performances across all six MEL benchmark datasets.", "In contrast with the current systems which adopt complex pipelines and hybrid components (Xu et al., 2020; Ji et al., 2020; Sung et al., 2020), SAPBERT applies a much simpler training procedure without requiring any preor post-processing steps.", "At test time, a simple nearest neighbour's search is sufficient for making a prediction.", "When compared with other domain-specific pretrained language models (e.g. BIOBERT and SCIBERT ), SAPBERT also brings substantial improvement by up to 20% on accuracy across all tasks.", "The effectiveness of the pretraining in SAPBERT is especially highlighted in the scientific language domain where SAPBERT outperforms previous SOTA even without fine-tuning on any MEL datasets.", "We also provide insights on pretraining's impact across domains and explore pretraining with fewer model parameters by using a recently introduced ADAPTER module in our training scheme.", "https://www.nlm.nih.gov/research/umls/knowledge_ sources/metathesaurus/release/statistics.html", "We design a metric learning framework that learns to self-align synonymous biomedical entities.", "The framework can be used as both pretraining on UMLS, and fine-tuning on task-specific datasets.", "We use an existing BERT model as our starting point.", "In the following, we introduce the key components of our framework.", "Formal Definition.", "Let ( x, y ) X Y denote a tuple of a name and its categorical label.", "For the self-alignment pretraining step, X Y is the set of all (name, CUI 5 ) pairs in UMLS, e.g. ( Remdesivir , C4726677); while for the fine-tuning step, it is formed as an entity mention and its corresponding mapping from the ontology, e.g. ( scratchy throat , 102618009).", "Given any pair of tuples ( x i , y i ) , ( x j , y j ) X Y , the goal of the self-alignment is to learn a function f ( ; ) : X R d parameterised by .", "Then, the similarity (cid:104) f ( x i ) , f ( x j ) (cid:105) (in this work we use cosine similarity) can be used to estimate the resemblance of x i and x j (i.e., high if x i , x j are synonyms and low otherwise).", "We model f by a BERT model with its output [CLS] token regarded as the representation of the input.", "6 During the learning, a sampling procedure selects the informative pairs of training samples and uses them in the pairwise metric learning loss function (introduced shortly).", "5 In UMLS, CUI is the C oncept U nique I dentifier.", "6 We tried multiple strategies including first-token, meanpooling, [CLS] and also NOSPEC (recommended by Vulic et al. 
2020) but found no consistent best strategy (optimal strategy varies on different *B ERT s).", "informative training examples (i.e. hard posi-tive/negative pairs) within a mini-batch for efficient training, Fig. 2.", "For biomedical entities, this step can be particularly useful as most examples can be easily classified while a small set of very hard ones cause the most challenge to representation learning.", "7 We start from constructing all possible triplets for all names within the mini-batch where each triplet is in the form of ( x a , x p , x n ) .", "Here x a is called anchor , an arbitrary name in the mini-batch; x p a positive match of x a (i.e. y a = y p ) and x n a negative match of x a (i.e. y a (cid:54) = y n ).", "Among the constructed triplets, we select out all triplets that violate the following condition: (cid:107) f ( x a ) f ( x p ) (cid:107) 2 < (cid:107) f ( x a ) f ( x n ) (cid:107) 2 + , (1) where is a pre-set margin.", "In other words, we only consider triplets with the negative sample closer to the positive sample by a margin of .", "These are the hard triplets as their original representations were very far from correct.", "Every hard triplet contributes one hard positive pair ( x a , x p ) and one hard negative pair ( x a , x n ) .", "We collect all such positive & negative pairs and denote them as P , N .", "A similar but not identical triplet mining condition was used by Schroff et al. (2015) for face recognition to select hard negative samples.", "Switching-off this mining process, causes a drastic performance drop (see Tab. 2).", "Loss Function.", "We compute the pairwise cosine similarity of all the BERT -produced name representations and obtain a similarity matrix S R |X b ||X b | where each entry S ij corresponds to the cosine similarity between the i -th and j -th names in the mini-batch b .", "We adapted the Multi-Similarity loss (MS loss, Wang et al. 
2019), a SOTA metric learning objective on visual recognition, for learning from the positive and negative pairs: L = 1 |X b | |X b | (cid:88) i =1 (cid:32) 1 log (cid:16) 1 + (cid:88) n N i e ( S in (cid:15) ) (cid:17) + 1 log (cid:16) 1 + (cid:88) p P i e ( S ip (cid:15) ) (cid:17)(cid:33) , (2) where , are temperature scales; (cid:15) is an offset applied on the similarity matrix; P i , N i are indices 7 Most of Hydroxychloroquine 's variants are easy: Hydrox-ychlorochin , Hydroxychloroquine (substance) , Hidroxicloroquina , but a few can be very hard: Plaquenil and HCQ .", "of positive and negative samples of the anchor i .", "8 While the first term in Eq.", "2 pushes negative pairs away from each other, the second term pulls positive pairs together.", "This dynamic allows for a re-calibration of the alignment space using the semantic biases of synonymy relations.", "The MS loss leverages similarities among and between positive and negative pairs to re-weight the importance of the samples.", "The most informative pairs will receive more gradient signals during training and thus can better use the information stored in data.", "Data Preparation Details for UMLS Pretraining.", "We download the full release of UMLS 2020AA version.", "9 We then extract all English entries from the MRCONSO.RFF raw file and convert all entity names into lowercase (dupli-cates are removed).", "Besides synonyms defined in MRCONSO.RFF , we also include tradenames of drugs as synonyms (extracted from MRREL.RRF ).", "After pre-processing, a list of 9,712,959 (name, CUI) entries is obtained.", "However, random batching on this list can lead to very few (if not none) positive pairs within a mini-batch.", "To ensure sufficient positives present in each mini-batch, we generate offline positive pairs in the format of (name 1 , name 2 , CUI) where name 1 and name 2 have the same CUI label.", "This can be achieved by enumerating all possible combinations of synonym pairs with common CUIs.", "For balanced training, any concepts with more than 50 positive pairs are randomly trimmed to 50 pairs.", "In the end we obtain a training list with 11,792,953 pairwise entries.", "UMLS Pretraining Details.", "During training, we use AdamW (Loshchilov and Hutter, 2018) with a learning rate of 2e-5 and weight decay rate of 1e-2 .", "Models are trained on the prepared pairwise UMLS data for 1 epoch (approximately 50k iterations) with a batch size of 512 (i.e., 256 pairs per mini-batch).", "We train with Automatic Mixed Precision (AMP) 10 provided in PyTorch 1.7.0.", "This takes approximately 5 hours on our machine (con-8 We explored several loss functions such as InfoNCE (Oord et al., 2018), NCA loss (Goldberger et al., 2005), simple cosine loss (Phan et al., 2019), max-margin triplet loss (Basaldella et al., 2020) but found our choice is empirically better.", "See App.", "B.2 for comparison.", "9 https://download.nlm.nih.gov/umls/kss/2020AA/ umls-2020AA-full.zip 10 https://pytorch.org/docs/stable/amp.html scientific language social media language model NCBI BC5CDR-d BC5CDR-c MedMentions AskAPatient COMETA @1 @5 @1 @5 @1 @5 @1 @5 @1 @5 @1 @5 vanilla BERT (Devlin et al., 2019) 67.6 77.0 81.4 89.1 79.8 91.2 39.6 60.2 38.2 43.3 40.4 47.7 + SAPBERT 91.6 95.2 92.7 95.4 96.1 98.0 52.5 72.6 68.4 87.6 59.5 76.8 BIOBERT (Lee et al., 2020) 71.3 84.1 79.8 92.3 74.0 90.0 24.2 38.5 41.4 51.5 35.9 46.1 + SAPBERT 91.0 94.7 93.3 95.5 96.6 97.6 53.0 73.7 72.4 89.1 63.3 77.0 BLUEBERT (Peng et al., 2019) 75.7 87.2 83.2 91.0 87.7 94.1 41.6 61.9 41.5 48.5 
42.9 52.9 + SAPBERT 90.9 94.0 93.4 96.0 96.7 98.2 49.6 73.1 72.4 89.4 66.0 78.8 CLINICALBERT (Alsentzer et al., 2019) 72.1 84.5 82.7 91.6 75.9 88.5 43.9 54.3 43.1 51.8 40.6 61.8 + SAPBERT 91.1 95.1 93.0 95.7 96.6 97.7 51.5 73.0 71.1 88.5 64.3 77.3 SCIBERT (Beltagy et al., 2019) 85.1 88.4 89.3 92.8 94.2 95.5 42.3 51.9 48.0 54.8 45.8 66.8 + SAPBERT 91.7 95.2 93.3 95.7 96.6 98.0 50.1 73.9 72.1 88.7 64.5 77.5 UMLSBERT (Michalopoulos et al., 2020) 77.0 85.4 85.5 92.5 88.9 94.1 36.1 55.8 44.4 54.5 44.6 53.0 + SAPBERT 91.2 95.2 92.8 95.5 96.6 97.7 52.1 73.2 72.6 89.3 63.4 76.9 PUBMEDBERT (Gu et al., 2020) 77.8 86.9 89.0 93.8 93.0 94.6 43.9 64.7 42.5 49.6 46.8 53.2 + SAPBERT 92.0 95.6 93.5 96.0 96.5 98.2 50.8 74.4 70.5 88.9 65.9 77.9 supervised SOTA 91.1 93.9 93.2 96.0 96.6 97.2 OOM OOM 87.5 -79.0 PUBMEDBERT 77.8 86.9 89.0 93.8 93.0 94.6 43.9 64.7 42.5 49.6 46.8 53.2 + SAPBERT 92.0 95.6 93.5 96.0 96.5 98.2 50.8 74.4 70.5 88.9 65.9 77.9 + SAPBERT (ADAPTER 13% ) 91.5 95.8 93.6 96.3 96.5 98.0 50.7 75.0 67.5 87.1 64.5 74.9 + SAPBERT (ADAPTER 1% ) 90.9 95.4 93.8 96.5 96.5 97.9 52.2 74.8 65.7 84.0 63.5 74.2 + SAPBERT (FINE-TUNED ) 92.3 95.5 93.2 95.4 96.5 97.9 50.4 73.9 89.0 96.2 75.1 ( 81.1 ) 85.5 ( 86.1 ) BIOSYN 91.1 93.9 93.2 96.0 96.6 97.2 OOM OOM 82.6 87.0 71.3 77.8 + (init. w/) SAPBERT 92.5 96.2 93.6 96.2 96.8 98.4 OOM OOM 87.6 95.6 77.0 84.2 Table 1: Top : Comparison of 7 BERT -based models before and after SAPBERT pretraining (+ SAPBERT ).", "Evaluation Data and Protocol.", "We experiment on 6 different English MEL datasets: 4 in the scientific domain (NCBI, Dogan et al. 2014; BC5CDR-c and BC5CDR-d, Li et al. 2016; MedMentions, Mohan and Li 2018) and 2 in the social media domain (COMETA, Basaldella et al. 2020 and AskAPatient, Limsopatham and Collier 2016).", "Descriptions of the datasets and their statistics are provided in App.", "A.", "We report Acc @1 and Acc @5 (denoted as @1 and @5 ) for evaluating performance.", "In all experiments, SAPBERT denotes further pretraining with our self-alignment method on UMLS.", "At the test phase, for all SAPBERT models we use nearest neighbour search without further fine-tuning on task data (unless stated otherwise).", "Except for numbers reported in previous papers, all results are the average of five runs with different random seeds.", "Fine-Tuning on Task Data.", "The red rows in Tab.", "1 are results of models (further) fine-tuned on the training sets of the six MEL datasets.", "Similar to pretraining, a positive pair list is generated through traversing the combinations of mention and all ground truth synonyms where mentions are from the training set and ground truth synonyms are from the reference ontology.", "We use the same optimiser and learning rates but train with a batch size of 256 (to accommodate the memory of 1 GPU).", "On scientific language datasets, we train for 3 epochs while on AskAPatient and COMETA we train for 15 and 10 epochs respectively.", "For BIOSYN on social media language datasets, we empirically found that 10 epochs work the best.", "Other configurations are the same as the original BIOSYN paper.", "*B ERT + SAPBERT (Tab. 
1, top).", "We illustrate the impact of SAPBERT pretraining over 7 existing BERT -based models (*B ERT = {B IOBERT , PUBMEDBERT , ...}).", "SAPBERT obtains consistent improvement over all *B ERT models across all datasets, with larger gains (by up to 31.0% absolute Acc @1 increase) observed in the social media domain.", "While SCIBERT is the leading model before applying SAPBERT , PUBMEDBERT +S APBERT performs the best afterwards.", "SAPBERT vs. SOTA (Tab. 1, bottom).", "We take PUBMEDBERT +S APBERT (w/wo fine-tuning) and compare against various published SOTA results (see App. C.1 for a full listing of 10 baselines) which all require task supervision.", "For the scientific language domain, the SOTA is BIOSYN (Sung et al., 2020).", "For the social media domain, the SOTA are Basaldella et al. (2020) and GENRANK (Xu et al., 2020) on COMETA and AskAPatient respectively.", "All these SOTA methods combine BERT with heuristic modules such as tf-idf, string matching and information retrieval system (i.e. Apache Lucene) in a multi-stage manner.", "Measured by Acc @1 , SAPBERT achieves new SOTA with statistical significance on 5 of the 6 datasets and for the dataset (BC5CDR-c) where SAPBERT is not significantly better, it performs on par with SOTA (96.5 vs. 96.6).", "Interestingly, on scientific language datasets, SAPBERT outperforms SOTA without any task supervision (fine-tuning mostly leads to overfitting and performance drops).", "On social media language datasets, unsupervised SAPBERT lags behind supervised SOTA by large margins, highlighting the well-documented complex nature of social media language (Baldwin et al., 2013; Limsopatham and Collier, 2015, 2016; Basaldella et al., 2020; Tutubalina et al., 2020).", "However, after fine-tuning on the social media datasets (using the MS loss introduced earlier), SAPBERT outperforms SOTA significantly, indicating that knowledge acquired during the self-aligning pretraining can be adapted to a shifted domain without much effort.", "The ADAPTER Variant.", "As an option for parameter efficient pretraining, we explore a variant of SAPBERT using a recently introduced training module named ADAPTER (Houlsby et al., 2019).", "While maintaining the same pretraining scheme with the same SAPBERT online mining + MS loss, instead of training from the full model of PUBMEDBERT , we insert new ADAPTER layers between Transformer layers of the fixed PUBMEDBERT , and only train the weights of these ADAPTER layers.", "In our experiments, we use the enhanced ADAPTER configuration by Pfeiffer et al. 
(2020).", "We include two variants where trained parameters are 13.22% and 1.09% of the full SAPBERT variant.", "The ADAPTER variant of SAPBERT achieves comparable performance to full-model-tuning in scientific datasets but lags behind in social media datasets, Tab.", "1.", "The results indicate that more parameters are needed in pretraining for knowledge transfer to a shifted domain, in our case, the social media datasets.", "The Impact of Online Mining (Eq.", "(1)).", "As suggested in Tab.", "2, switching off the online hard pairs mining procedure causes a large performance drop in @1 and a smaller but still significant drop in @5 .", "This is due to the presence of many easy and already well-separated samples in the mini-batches.", "These uninformative training examples dominated the gradients and harmed the learning process.", "Integrating SAPBERT in Existing Systems.", "SAPBERT can be easily inserted into existing BERT -based MEL systems by initialising the systems with SAPBERT pretrained weights.", "We use the SOTA scientific language system, BIOSYN (originally initialised with BIOBERT weights), as an example and show the performance is boosted across all datasets (last two rows, Tab. 1).", "We present SAPBERT , a self-alignment pretraining scheme for learning biomedical entity representations.", "We highlight the consistent performance boost achieved by SAPBERT , obtaining new SOTA in all six widely used MEL benchmarking datasets.", "Strikingly, without any fine-tuning on task-specific labelled data, SAPBERT already outperforms the previous supervised SOTA (sophisticated hybrid entity linking systems) on multiple datasets in the scientific language domain.", "Our work opens new avenues to explore for general domain self-alignment (e.g. by leveraging knowledge graphs such as DB-pedia).", "We plan to incorporate other types of relations (i.e., hypernymy and hyponymy) and extend our model to sentence-level representation learning.", "In particular, our ongoing work using a combination of SAPBERT and ADAPTER is a promising direction for tackling sentence-level tasks.", "We thank the three reviewers and the Area Chair for their insightful comments and suggestions.", "FL is supported by Grace & Thomas C.H. Chan Cambridge Scholarship.", "NC and MB would like to acknowledge funding from Health Data Research UK as part of the National Text Analytics project." ]
[ "abstain", "abstain", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "abstain", "objective", "objective", "method", "other", "other", "other" ]
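The self-alignment objective described above combines online hard triplet mining (Eq. 1) with a Multi-Similarity loss over the mined pairs (Eq. 2). The PyTorch sketch below follows those two equations (L2 distances for mining, cosine similarities for the loss), but the hyperparameter values, the function name and the simple per-anchor loop are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def self_alignment_loss(embeddings, labels, lam=0.2, alpha=2.0, beta=10.0, eps=1.0):
    """embeddings: (B, d) [CLS] vectors for the names in a mini-batch.
    labels:     (B,)  concept (CUI) ids. Hyperparameters are illustrative."""
    emb = F.normalize(embeddings, dim=-1)
    sim = emb @ emb.t()                                   # cosine similarity matrix S
    dist = torch.cdist(embeddings, embeddings)            # L2 distances used for mining
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask, neg_mask = same & ~eye, ~same

    total = embeddings.new_zeros(())
    for a in range(len(labels)):
        d_p, d_n = dist[a][pos_mask[a]], dist[a][neg_mask[a]]
        if d_p.numel() == 0 or d_n.numel() == 0:
            continue
        # Hard triplets violate Eq. (1): the positive is at least lam further
        # from the anchor than the negative, i.e. d(a,p) >= d(a,n) + lam.
        hard = d_p.unsqueeze(1) >= d_n.unsqueeze(0) + lam          # (P, N)
        p_sims = sim[a][pos_mask[a]][hard.any(dim=1)]              # hard positive pairs
        n_sims = sim[a][neg_mask[a]][hard.any(dim=0)]              # hard negative pairs
        # Multi-Similarity loss terms of Eq. (2) for this anchor.
        pos_term = torch.log1p(torch.exp(-beta * (p_sims - eps)).sum()) / beta
        neg_term = torch.log1p(torch.exp(alpha * (n_sims - eps)).sum()) / alpha
        total = total + pos_term + neg_term
    return total / len(labels)
```

The loop makes the pair selection explicit; in practice the same mining condition can be vectorised over the whole batch, and batches are built from the offline positive-pair list so that each anchor has at least one positive in the mini-batch.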
[ "This paper describes and tests a method for carrying out quantified reproducibility assessment (QRA) that is based on concepts and definitions from metrology.", "QRA produces a single score estimating the degree of reproducibility of a given system and evaluation measure, on the basis of the scores from, and differences between, different reproductions.", "We test QRA on 18 system and evaluation measure combinations (involving diverse NLP tasks and types of evaluation), for each of which we have the original results and one to seven reproduction results.", "The proposed QRA method produces degree-of-reproducibility scores that are comparable across multiple reproductions not only of the same, but of different original studies.", "We find that the proposed method facilitates insights into causes of variation between reproductions, and allows conclusions to be drawn about what changes to system and/or evaluation design might lead to improved reproducibility.", "Reproduction studies are becoming more common in Natural Language Processing (NLP), with the first shared tasks being organised, including REPROLANG (Branco et al., 2020) and ReproGen (Belz et al., 2021b).", "In NLP, reproduction studies generally address the following question: if we create and/or evaluate this system multiple times, will we obtain the same results?", "To answer this question for a given specific system, typically (Wieling et al., 2018; Arhiliuc et al., 2020; Popovic and Belz, 2021) an original study is selected and repeated more or less closely, before comparing the results obtained in the original study with those obtained in the repeat, and deciding whether the two sets of results are similar enough to support the same conclusions.", "This framing, whether the same conclusions can be drawn, involves subjective judgments and different researchers can come to contradictory conclusions: e.g. the four papers (Arhiliuc et al., 2020; Bestgen, 2020; Caines and Buttery, 2020; Huber and ltekin, 2020) reproducing Vajjala and Rama (2018) in REPROLANG all report similarly large differences, but only Arhiliuc et al. conclude that reproduction was unsuccessful.", "There is no standard way of going about a reproduction study in NLP, and different reproduction studies of the same original set of results can differ substantially in terms of their similarity in system and/or evaluation design (as is the case with the Vajjala and Rama (2018) reproductions, see Section 4 for details).", "Other things being equal, a more similar reproduction can be expected to produce more similar results, and such (dis)similarities should be factored into reproduction analysis and conclusions, but NLP lacks a method for doing so.", "Being able to assess reproducibility of results objectively and comparably is important not only to establish that results are valid, but to provide evidence about which methods have better/worse reproducibility and what may need to be changed to improve reproducibility.", "To do this, assessment has to be done in a way that is also comparable across reproduction studies of different original studies, e.g. 
to develop common expectations of how similar original and reproduction results should be for different types of system, task and evaluation.", "In this paper, we", "(i) describe a method for quantified reproducibility assessment (QRA) directly derived from standard concepts and definitions from metrology which addresses the above issues, and", "(ii) test it on diverse sets of NLP results.", "Following a review of related research (Section 2), we present the method (Section 3), tests and results (Section 4), discuss method and results (Section 5), and finish with some conclusions (Section 6).", "some code you read about in a paper and liked the sound of, you run it on the data provided, only to find that the results are not the same as reported in the paper, in fact they are likely to be worse (Belz et al., 2021a).", "When both data and code are provided, the number of potential causes of such differences is limited, and the NLP field has shared increasingly detailed information about system, dependencies and evaluation to chase down sources of differences.", "Sharing code and data together with detailed information about them is now expected as standard, and checklists and datasheets have been proposed to standardise information sharing (Pineau, 2020; Shimorina and Belz, 2021).", "Reproducibility more generally is becoming more of a research focus.", "There have been several workshops and initiatives on reproducibility, including workshops at ICML 2017 and 2018, the reproducibility challenge at ICLR 2018 and 2019, and at NeurIPS 2019 and 2020, the REPROLANG (Branco et al., 2020) initiative at LREC 2020, and the ReproGen shared task on reproducibility in NLG (Belz et al., 2021b).", "Despite this growing body of research, no consensus has emerged about standards, terminology and definitions.", "Particularly for the two most frequently used terms, reproducibility and replicability , multiple divergent definitions are in use, variously conditioned on same vs. different teams, methods, artifacts, code, and data.", "For example, for Rougier et al. (2017), reproducing a result means running the same code on the same data and obtaining the same result, while replicating the result is writing and running new code based on the information provided by the original publication.", "For Wieling et al. (2018), reproducibility is achieving the same results using the same data and methods.", "According to the ACM's definitions (Associa-tion for Computing Machinery, 2020), results have been reproduced if obtained in a different study by a different team using artifacts supplied in part by the original authors, and replicated if obtained in a different study by a different team using artifacts not supplied by the original authors.", "The ACM originally had these definitions the other way around until asked by ISO to bring them in line with the scientific standard (ibid.).", "Conversely, in Drummond's view 2009 obtaining the same result by re-running an experiment in the same way as the original is replicability , while reproducibility is obtaining it in a different way.", "Whitaker (2017), followed by Schloss (2018), defines four concepts rather than two, basing definitions of reproducibility, replicability, robustness and generalisability on the different possible combinations of same vs. 
different data and code.", "None of these definitions adopt the general scientific concepts and definitions pertaining to reproducibility, codified in the International Vocabulary of Metrology, VIM (JCGM, 2012).", "One issue is that they all reduce the in-principle open-ended number of dimensions of variation between measurements accounted for by VIM to just two or three (code, data and/or team).", "Another is that, unlike VIM, they don't produce comparable results.", "NLP does not currently have a shared approach to deciding reproducibility, and results from reproductions as currently reported are not comparable across studies and can, as mentioned in the introduction, lead to contradictory conclusions about an original study's reproducibility.", "There appears to be no work at all in NLP that aims to estimate degree of reproducibility, which would allow cross-study comparisons and conclusions.", "Metrology is a meta-science: its subject is the standardisation of measurements across all of science to ensure comparability.", "Computer science has long borrowed terms, most notably reproducibility, from metrology, albeit not adopting the same definitions (as discussed in Section 2 above).", "In this section, we describe quantified reproducibility assessment (QRA), an approach that is directly derived from the concepts and definitions of metrology, adopting the latter exactly as they are, and yields assessments of the degree of similarity between numerical results and between the studies that produced them.", "We start below with the concepts and definitions that QRA is based on, followed by an overview of the framework (Section 3.2) and steps in applying it in practice (Section 3.3).", "The International Vocabulary of Metrology (VIM) (JCGM, 2012) defines repeatability and reproducibility as follows (defined terms in bold, see VIM for subsidiary defined terms):", "2.21 measurement repeatability (repeatability for short) is measurement precision under a set of repeatability conditions of measurement.", "2.20 a repeatability condition of measurement (repeatability condition) is a condition of measurement, out of a set of conditions that includes the same measurement procedure, same operators, same measuring system, same operating conditions and same location, and replicate measurements on the same or similar objects over a short period of time.", "2.25 measurement reproducibility (reproducibility) is measurement precision under reproducibility conditions of measurement.", "2.24 a reproducibility condition of measurement (reproducibility condition) is a condition of measurement, out of a set of conditions that includes different locations, operators, measuring systems, etc.", "A specification should give the conditions changed and unchanged, to the extent practical.", "In other words, VIM considers repeatability and reproducibility to be properties of measurements (not objects, scores, results or conclusions), and defines them as measurement precision, i.e.
both are quantified by calculating the precision of a set of measured quantity values.", "Both concepts are defined relative to a set of conditions of measurement: the conditions have to be known and specified for assessment of repeatability and reproducibility to be meaningful.", "In repeatability, conditions are the same, whereas in reproducibility, they differ.", "In an NLP context, objects are systems, and measurements involve applying an evaluation method to a system, usually via obtaining a sample of its outputs and applying the method to the sample (further details of how concepts map to NLP are provided in Section 3.3).", "The VIM definitions translate directly to the following definition of repeatability R0 (where all conditions of measurement C are the same across measurements): R0(M1, M2, ... Mn) := Precision(v1, v2, ... vn), where Mi: (m, O, ti, C) ↦ vi (1), and the Mi are repeat measurements for measurand m performed on object O at different times ti under (the same) set of conditions C, producing measured quantity values vi.", "Below, the coefficient of variation is used as the precision measure, but other measures are possible.", "Conditions of measurement are attribute/value pairs each consisting of a name and a value (for examples, see the following section).", "Reproducibility R is defined in the same way as R0 except that condition values (but not names) differ for one or more of the conditions of measurement Ci: R(M1, M2, ... Mn) := Precision(v1, v2, ... vn), where Mi: (m, O, ti, Ci) ↦ vi (2).", "Precision is typically reported in terms of some or all of the following: mean, standard deviation with 95% confidence intervals, coefficient of variation, and percentage of measured quantity values within n standard deviations.", "We opt for the coefficient of variation (CV) because it is a general measure, not in the unit of the measurements (unlike mean and standard deviation), providing a quantification of precision (degree of reproducibility) that is comparable across studies (Ahmed, 1995, p.
57).", "This also holds for percentage within n standard deviations, but the latter is a less recognised measure, and likely to be less intuitive for many.", "In reproduction studies in NLP/ML, sample sizes tend to be very small (a sample size of 8, one original study plus 7 reproductions, as in Table 6, is currently unique).", "We therefore need to use de-biased sample estimators: we use the unbiased sample standard deviation, denoted s, with confidence intervals calculated using a t-distribution, and the standard error of the unbiased sample standard deviation approximated on the basis of the standard error of the unbiased sample variance se(s^2) as se_{s^2}(s) ≈ se(s^2)/(2σ) (Rao, 1973).", "Assuming measured quantity values are normally distributed, we calculate the standard error of the sample variance in the usual way: se(s^2) = sqrt(2σ^4/(n-1)).", "Finally, we also use a small-sample correction (indicated by the star) for the coefficient of variation: CV* = (1 + 1/(4n)) CV (Sokal and Rohlf, 1971).", "Before applying CV to values on scales that do not start at 0 (mostly in human evaluations) we shift values to start at 0 to ensure comparability.", "This means that to calculate the CV* scores in the tables below, measurements are first shifted.", "Using the defined VIM terms and the notations from Section 3.2, we can refine the question from the start of this paper as follows: if we perform multiple measurements of object O and measurand m under reproducibility conditions of measurement Ci, what is the precision of the measured quantity values we obtain?", "For NLP, this means calculating the precision of multiple evaluation scores for the same system and evaluation measure.", "Focusing here on reproducibility assessment where we start from an existing set of results (rather than a set of experiments specifically designed to test reproducibility), the steps in performing QRA are as follows: 1. For a set of n measurements to be assessed, identify the shared object and measurand.", "2. Identify all conditions of measurement Ci for which information is available for all measurements, and specify values for each condition, including measurement method and procedure.", "3. Gather the n measured quantity values v1, v2, ... vn.", "4. Compute precision for v1, v2, ... vn, giving reproducibility score R.", "5. Report the resulting R score and associated confidence statistics, alongside the Ci.", "In NLP terms, the object is the ready-to-use system (binaries if available; otherwise code, dependencies, parameter values, how the system was compiled and trained) being evaluated (e.g. the NTS-default system variant in Table 1), the measurand is the quantity intended to be measured (e.g. BLEU-style modified n-gram precision), and measurement method and procedure capture how to evaluate the system (e.g.
obtaining system outputs for a specified set of inputs, and applying preprocessing and a given BLEU implementation to the latter).", "VIM holds that reproducibility assessment is only meaningful if the reproducibility conditions of measurement are specified for a given test.", "Conditions of measurement cover every aspect and detail of how a measurement was performed and how the measured quantity value was obtained.", "The key objective is to capture all respects in which the measurements to be assessed are known to be either the same or different.", "If QRA is performed for a set of existing results, it is often not possible to discover every aspect and detail of how a measurement was performed, so a reduced set may have to be used (unlike in experiments designed to test reproducibility where such details can be gathered as part of the experimental design).", "The reproducibility and evaluation checklists mentioned in Section 2 (Pineau, 2020; Shimorina and Belz, 2021) capture properties that are in effect conditions of measurement, and in combination with code, data and other resources serve well as a way of specifying conditions of measurement, if they have been completed by authors.", "However, at the present time, completed checklists are not normally available.", "The following is a simple set of conditions of measurement the information required for which is typically available for existing work (we include object and measurand for completeness although strictly they are not conditions, as they must be the same in each measurement in a given QRA test): 1. Object : the system (variant) being evaluated.", "4 E.g. a given MT system.", "2. Measurand : the quantity intended to be evaluated.", "5 E.g. BLEU-style n-gram precision or human-assessed Fluency.", "3. Object conditions:", "(a) System code : source code including any parameters.", "E.g. the complete code implementing an MT system.", "(b) Compile/training information : steps from code plus parameters to fully compiled and trained system, including dependencies and environment.", "E.g. complete information about how the MT system code was compiled and the system trained.", "(a) Method specification : full description of method used for obtaining values quantifying the measurand.", "E.g. a formal definition of BLEU.", "(b) Implementation : the method implemented in a form that can be applied to the object in order to obtain measured quantity values.", "E.g. a full implementation of BLEU.", "4 VIM doesn't define object' but refers to it as that which is being measured.", "(a) Procedure : specification of how system outputs (or other system characteristics) are obtained and the measurement method is applied to them.", "E.g. running a BLEU tool on system outputs and reference outputs.", "(b) Test set : the data used in obtaining and evaluating system outputs (or other system characteristics).", "E.g. a test set of source-language texts and reference translations.", "(c) Performed by : who performed the measurement procedure and any additional information about how they did it.", "E.g. 
the team applying the BLEU tool, and the run-time environment they used.", "The names of the conditions of measurement used in this paper are boldfaced above.", "The values for each condition characterise how measurements differ in respect of the condition.", "In reporting results from QRA tests in the following section, we use paper identifiers as shorthand for each distinct condition value (full details in each case being available from the referenced papers).", "Table 1 provides an overview of the 18 object/ measurand pairs (corresponding to 116 individual mea-7", "surements) for which we performed QRA tests in this study.", "For each object/measurand pair, the columns show, from left to right, information about the system evaluated (object), the evaluation measure applied (measurand), the number of scores (measured quantity values) obtained, the papers in which systems and scores were first reported, and the NLP task and type of evaluation involved.", "There are three sets of related systems:", "(i) the (single) PASS football report generator (van der Lee et al., 2017),", "(ii) Vajjala and Rama (2018)'s 11 multilingual essay scoring system variants, and", "(iii) two variants of Nisioi et al. (2017)'s neural text simplifier (NTS).", "PASS is evaluated with three evaluation measures (human-assessed Clarity, Fluency and Stance Identifiability), the essay scoring systems with one (weighted F1), and the NTS systems with two (BLEU and SARI).", "For PASS we have one reproduction study, for the essay scorers seven, and for the NTS systems, from three to six.", "The PASS reproduction was carried out as part of ReproGen (Belz et al., 2021b), the reproductions of the essay-scoring systems and of one of the NTS systems as part of REPROLANG (Branco et al., 2020), and we carried out an additional reproduction study of the NTS systems for this paper.", "8 The PASS text generation system is rule-based, the essay classifiers are theory-guided and data-driven' hybrids, and the text simplifiers are end-to-end neural systems.", "This gives us a good breadth 8 Authors of original studies gave permission for their work to be reproduced (Branco et al., 2020; Belz et al., 2021b).", "The neural text simplification systems reported by Nisioi et al. (2017) were evaluated with BLEU (n-gram similarity between outputs and multiple reference texts) and SARI (based on word added/retained/deleted in outputs compared to both inputs and reference texts, summing over addition and retention F-scores and deletion Precisions).", "Table 4 shows BLEU and SARI scores for the two system variants from the original paper and the two reproduction studies, alongside the four corresponding CV values.", "In their reproduction, Cooper and Shardlow (2020) regenerated test outputs for NTS-w2v_def, but not for NTS_def, which explains the missing scores in Column 4. The different numbers of scores in different rows in Columns 69 are due to our own reproduction using Nisioi et", "al.'s SARI script, but two different BLEU scripts:", "(i) Nisioi et", "al.'s script albeit with the tokeniser replaced by our own because the former did not work due to changes in the NLTK library; and", "(ii) SacreBLEU (Xu et al., 2016).", "Table 5 shows the conditions of measurement for each of the 22 individual measurements.", "The measured quantity values for those measurements where", "Comp./trained by=Nisioi et al. 
are identical for the SARI metric (scores highlighted by green/lighter shading and italics), but differ by up to 1.4 points for BLEU (scores highlighted by blue/darker shading).", "Because Test set=Nisioi et al. in all cases, the differences in these BLEU scores can only be caused by differences in BLEU scripts and how they were run.", "The corresponding CV is as big as 0.838 for (just) the four NTS_def BLEU scores, and 1.314 for (just) the three NTS-w2v_def BLEU scores, reflecting known problems with non-standardised BLEU scripts (Post, 2018).", "If we conversely look just at those measurements (identifiable by boldfaced measured quantity values in Table 5) where the reproducing team regenerated outputs (with the same system code) and evaluation scripts were the same, SARI CV is 3.11 for the NTS_def variants, and 4.05 for the NTS-w2v_def variants (compared in both cases to 0 (perfect) when the same outputs are used).", "BLEU CV is 2.154 for the NTS_def variants (compared to 0.838 for same outputs but different evaluation scripts, as above), and 6.598 for the NTS-w2v_def variants (compared to 1.314 for same outputs but different evaluation scripts).", "These differences arise simply from running the system in different environments.", "The overall higher (worse) CV values for NTS-w2v_def variants (compared to NTS_def) are likely to be partly due to the NTS models using one third party tool (openNMT), and the NTS-w2v models using two (openNMT and word2vec), i.e. the latter are more susceptible to changes in dependencies.", "The PASS system, developed by van der Lee et al. (2017), generates football match reports from the perspective of each of the competing teams.", "The original study evaluated the system for Clarity, Fluency and Stance Identifiability in an evaluation with 20 evaluators and a test set of 10 output pairs.", "The evaluation was repeated with a slightly different evaluation interface and a different cohort of evaluators by Mille et al. (2021).", "Table 2 shows the results from the original and reproduction evaluations (columns 3 and 4), where the Clarity and Fluency results are the mean scores from 7-point agreement scales, and Identifiability results are the percentage of times the evaluators correctly guessed the team whose supporters a report was written for.", "Columns 69 show the corresponding sample size (number of reproductions plus original study), mean, standard deviation (stdev), the confidence interval (CI) for the standard deviation, and CV , all calculated on the shifted scores (see Section 3.2).", "Table 3 shows the values (here, paper identifiers) for the nine conditions of measurement introduced in Section 3.3, for each of the six individual measurements (three evaluation measures times two studies).", "Note that both object conditions and the 22 test set condition are the same, because Mille et al. 
used the system outputs shared by van der Lee et al.", "The values for the Implemented by , Procedure and Performed by conditions reflect the differences in the two evaluations in design, evaluator cohorts, and the teams that performed them.", "The scores vary to different degrees for the three measurands, with CV lowest (reproducibility best) for Stance Identifiability, and highest (worst) for Fluency.", "These CV results are likely to reflect that evaluators agreed more on Clarity than Fluency.", "Moreover, the binary stance identification assessment has better reproducibility than the other two criteria which are assessed on 7-point rating scales.", "The 11 multilingual essay scoring system variants reported by Vajjala and Rama (2018) were evaluated by weighted F1 (wF1) score.", "Table 6 shows wF1 scores for the 11 multilingual system variants from each of the five papers, alongside the 11 corresponding CV values.", "Table 7 in the appendix shows the corresponding conditions of measurement.", "The baseline classifier (mult-base) uses document length (number of words) as its only feature.", "For the other variants, +/indicates that the multilingual classifier was / was not given information about which language the input was in; the mult-word variants use word n-grams only; mult-word uses POS (part of speech) tag n-grams only; mult-dep uses n-grams over dependency relation, dependent POS, and head POS triples; mult-dom uses domain-specific linguistic features including document length, lexical richness and errors; mult-emb uses word and character embeddings.", "The mult-base and mult-dom models are logistic regressors, the others are random forests.", "A very clear picture emerges: system variant pairs that differ only in whether they do or do not use language information have very similar CV scores.", "For example, mult-POS (POS n-grams without language information) and mult-POS + (POS n-grams with language information) both have a very good degree of wF1-reproducibility, their CV being 3.818 and 3.808 respectively; mult-word (word n-grams without language information) and mult-word + (word n-grams with language information) have notably higher CV , around 10.", "This tendency holds for all such pairs, indicating that using language information makes next to no difference to reproducibility.", "Moreover, the mult-dom and mult-emb variants all have similar CV .", "9 The indication is that the syntactic information is obtained/used in a way that is particularly reproducible, whereas the domain-specific information and the embeddings are obtained/used in a way that is particularly hard to reproduce.", "Overall, the random forest models using syntactic features have the best reproducibility; the logistic regressors using domain-specific features have the worst.", "Quantified reproducibility assessment (QRA) enables assessment of the degree of reproducibility of evaluation results for any given system and evaluation measure in a way that is scale-invariant 10 and comparable across different QRAs, for reproductions involving either the same or different original studies.", "Moreover, formally capturing (dis)similarities between systems and evaluation designs enables reproducibility to be assessed relative to such (dis)similarities.", "In combination, a set of results from QRA tests for the same system and evaluation measure can provide pointers to which aspects of the system and evaluation might be associated with low reproducibility.", "E.g. 
for the wF1 evaluations of the essay scoring systems above, it is clear that variations in reproducibility are associated at least in part with the different features used by systems.", "It might be expected that the reproducibility of human-assessed evaluations is generally worse than metric-assessed.", "Our study revealed a more mixed picture.", "As expected, the Fluency and Clarity evaluations of the PASS system were among those with highest CV, and the BLEU and SARI evaluation of the NTS systems and wF1 evaluation of the mult-POS and mult-dep systems were among those with lowest CV.", "However, human-assessed Stance Identifiability of PASS was among the most reproducible, and metric-assessed wF1 of mult-base, mult-dom and mult-emb were among the worst.", "In this paper, our focus has been QRA testing of existing research results.", "However, ideally, QRA would be built into new method development from the outset, where at first reporting, a detailed standardised set of conditions of measurement is specified, and repeatability tests (where all conditions are identical except for the team conducting the tests, see Section 3.2) are performed to determine baseline reproducibility.", "The high CV for the baseline system may be due to an issue with the evaluation code (macro-F1 instead of weighted F1), as reported by Bestgen (Section 3.2, first paragraph), Caines and Buttery (Section 2.5, one before last paragraph) and Huber and Çöltekin (Section 3.2, second paragraph).", "If evaluation scores are multiplied by a common factor, CV does not change.", "Such repeatability QRA would provide quality assurance for new methods as well as important pointers for future reproductions regarding what degree of reproducibility to expect for given (types of) methods.", "If this is not possible, post-hoc reproducibility QRA (where there are differences in conditions of measurement values) is performed instead.", "If this yields high (poor) CV, one way to proceed is to minimise differences in conditions of measurement between the studies and observe the effect on CV, changing aspects of system and evaluation design and adding further conditions of measurement if need be.", "For human evaluation in particular, persistently high CV would indicate a problem with the method itself.", "We have described an approach to quantified reproducibility assessment (QRA) based on concepts and definitions from metrology, and tested it on 18 system and evaluation measure combinations involving diverse NLP tasks and types of evaluation.", "QRA produces a single score that quantifies the degree of reproducibility of a given system and evaluation measure, on the basis of the scores from, and differences between, multiple reproductions of the same original study.", "We found that the approach facilitates insights into sources of variation between reproductions, produces results that are comparable across different reproducibility assessments, and provides pointers about what needs to be changed in system and/or evaluation design to improve reproducibility.", "A recent survey (Belz et al., 2021a) found that just 14% of the 513 original/reproduction score pairs analysed were exactly the same.", "Judging the remainder simply 'not reproduced' is of limited usefulness, as some are much closer to being the same than others.", "At the same time, assessments of whether the same conclusions can be drawn on the basis of different scores involve subjective judgments and are prone to disagreement among assessors.", "Quantifying the 
closeness of results as in QRA, and, over time, establishing expected levels of closeness, seems a better way forward.", "We are grateful to the anonymous reviewers and area chairs for their exceptionally detailed and helpful feedback.", "Popovic's work on this s study was funded by the ADAPT SFI Centre for Digital Media Technology which is funded by Science Foundation Ireland through the SFI Research Centres Programme, and co-funded under the European Regional Development Fund (ERDF) through Grant 13/RC/2106.", "Mille's work was supported by the European Commission under the H2020 program contract numbers 786731, 825079, 870930 and 952133." ]
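The CV*-based reproducibility score described in Section 3.2 can be computed in a few lines; the sketch below is our own illustration (not code from the paper), with hypothetical function names and example scores, assuming the ddof=1 sample standard deviation as an approximation of the unbiased estimator and CV expressed as a percentage of the mean.

    import numpy as np

    def shifted_cv_star(values, scale_min=0.0):
        # Shift values so the measurement scale starts at 0 (Section 3.2),
        # then compute the small-sample-corrected coefficient of variation
        # CV* = (1 + 1/(4n)) * CV, with CV = 100 * s / mean.
        v = np.asarray(values, dtype=float) - scale_min
        n = v.size
        s = v.std(ddof=1)                      # sample standard deviation
        cv = 100.0 * s / v.mean()              # coefficient of variation (%)
        return (1.0 + 1.0 / (4.0 * n)) * cv    # Sokal and Rohlf (1971) correction

    # Hypothetical example: four BLEU scores for the same system,
    # from an original study and three reproductions.
    print(round(shifted_cv_star([84.50, 84.51, 83.91, 83.11]), 3))

Because the resulting score is unit-free, the same function can be applied to BLEU, SARI, wF1 or human rating scales, once values measured on scales that do not start at 0 are shifted via scale_min.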
[ "abstain", "abstain", "result", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Success of deep learning techniques have renewed the interest in development of dialogue systems.", "However, current systems struggle to have consistent long term conversations with the users and fail to build rapport.", "Topic spotting, the task of automatically inferring the topic of a conversation, has been shown to be helpful in making a dialog system more engaging and efficient.", "We propose a hierarchical model with self attention for topic spotting.", "Experiments on the Switchboard corpus show the superior performance of our model over previously proposed techniques for topic spotting and deep models for text classification.", "Additionally, in contrast to offline processing of dialog, we also analyze the performance of our model in a more realistic setting i.e. in an online setting where the topic is identified in real time as the dialog progresses.", "Results show that our model is able to generalize even with limited information in the online setting.", "Recently, a number of commercial conversation systems have been introduced e.g. Alexa, Google Assistant, Siri, Cortana, etc.", "Most of the available systems perform well on goal-oriented conversations which spans over few utterances in a dialogue.", "However, with longer conversations (in open domains), existing systems struggle to remain consistent and tend to deviate from the current topic during the conversation.", "This hinders the establishment of long term social relationship with the users (Dehn and Van Mulken, 2000).", "In order to have coherent and engaging conversations with humans, besides other relevant natural language understanding (NLU) techniques (Jokinen and McTear, 2009), a system, while responding, should take into account the topic of the current conversation i.e. 
Topic Spotting.", "in commercial dialog systems (Bost et al., 2013; Jokinen et al., 2002) directly dealing with the customers.", "Topical information is useful for speech recognition systems (Iyer and Ostendorf, 1999) as well as in audio document retrieval systems (Hazen et al., 2007; Hazen, 2011).", "Importance of topic spotting can be gauged from the work of Alexa team (Guo et al., 2018), who have proposed topic based metrics for evaluating the quality of conversational bots.", "The authors empirically show that topic based metrics correlate with human judgments.", "Given the importance of topical information in a dialog system, this paper proposes self attention based hierarchical model for predicting topics in a dialog.", "We evaluate our model on Switchboard (SWBD) corpus (Godfrey et al., 1992) and show that our model supersedes previously applied techniques for topic spotting.", "We address the evaluative limitations of the current SWBD corpus by creating a new version of the corpus referred as SWBD2.", "We hope that SWBD2 corpus would provide a new standard for evaluating topic spotting models.", "We also experiment with an online setting where we examine the performance of our topic classifier as the length of the dialog is varied and show that our model can be used in a real time dialog system as well.", "Topic spotting is the task of detecting the topic of a dialog (Hazen et al., 2007).", "Topic spotting has been an active area of research over the past few decades both in the NLP community as well as in the speech community.", "In this section we briefly outline some of the main works in this area.", "For a detailed survey of prior research in this area, the reader is referred to Hazen (2011).", "Most of the methods proposed for topic spotting use features extracted from transcribed text as in-w k, 1 w k, 2 w 1 , 1 : w 1 ,L 0 w k,L E E E a 1 a 2 a L T <latexit sha1_base64=\"FjsLb5IuxN/Kp+xQz1FBQUFnWZA=\">AAAB8XicbVDLSgNBEOyNrxhfUY9eBoPgKeyKoDcDXjxGyAuTJcxOepMhs7PLzKwQlvyFlxyU4NUv8De8+TdOHgdNLGgoqrrp6g4SwbVx3W8nt7G5tb2T3y3s7R8cHhWPTxo6ThXDOotFrFoB1Si4xLrhRmArUUijQGAzGN7P/OYzKs1jWTOjBP2I9iUPOaPGSk+diJqBDrPauFssuWV3DrJOvCUp3X1OJlMAqHaLX51ezNIIpWGCat323MT4GVWGM4HjQifVmFA2pH1sWypphNrP5onH5MIqPRLGypY0ZK7+nshopPUoCmznPOGqNxP/89qpCW/9jMskNSjZYlGYCmJiMjuf9LhCZsTIEsoUt1kJG1BFmbFPKtgneKsnr5PGVdlzy96jV6pcwwJ5OINzuAQPbqACD1CFOjCQ8AKv8OZoZ+JMnfdFa85ZzpzCHzgfP6AFk8I=</latexit> <latexit sha1_base64=\"hbhZMlAjkUdy33rAl72tevAC25c=\">AAAB8XicbVDLSgNBEOyJrxhfUY9eBoPgKeyKoDcDXjxGyAuTJcxOZpMhs7PLzKwQlvyFlwiKePUL/A1v/o2zmxw0saChqOqmq9uPBdfGcb5RYW19Y3OruF3a2d3bPygfHrV0lCjKmjQSker4RDPBJWsabgTrxIqR0Bes7Y9vM7/9yJTmkWyYScy8kAwlDzglxkoPvZCYkQ7SxrRfrjhVJwdeJe6CVG4+Zxme6/3yV28Q0SRk0lBBtO66Tmy8lCjDqWDTUi/RLCZ0TIasa6kkIdNemiee4jOrDHAQKVvS4Fz9PZGSUOtJ6NvOPOGyl4n/ed3EBNdeymWcGCbpfFGQCGwinJ2PB1wxasTEEkIVt1kxHRFFqLFPKtknuMsnr5LWRdV1qu69W6ldwhxFOIFTOAcXrqAGd1CHJlCQ8AQv8Io0mqE39D5vLaDFzDH8Afr4Af1ClYc=</latexit> <latexit sha1_base64=\"hbhZMlAjkUdy33rAl72tevAC25c=\">AAAB8XicbVDLSgNBEOyJrxhfUY9eBoPgKeyKoDcDXjxGyAuTJcxOZpMhs7PLzKwQlvyFlwiKePUL/A1v/o2zmxw0saChqOqmq9uPBdfGcb5RYW19Y3OruF3a2d3bPygfHrV0lCjKmjQSker4RDPBJWsabgTrxIqR0Bes7Y9vM7/9yJTmkWyYScy8kAwlDzglxkoPvZCYkQ7SxrRfrjhVJwdeJe6CVG4+Zxme6/3yV28Q0SRk0lBBtO66Tmy8lCjDqWDTUi/RLCZ0TIasa6kkIdNemiee4jOrDHAQKVvS4Fz9PZGSUOtJ6NvOPOGyl4n/ed3EBNdeymWcGCbpfFGQCGwinJ2PB1wxasTEEkIVt1kxHRFFqLFPKtknuMsnr5LWRdV1qu69W6ldwhxFOIFTOAcXrqAGd1CHJlCQ8AQv8Io0mqE39D5vLaDFzDH8Afr4Af1ClYc=</latexit> <latexit 
sha1_base64=\"WBVcOY/JwJ318Rf7YS5SPbZt5Jo=\">AAAB8XicbVDLSsNAFL2pr1pfVZduBovgqiQi6LLgxmWFvrANZTKdtEMnkzBzI5TQv3DjQhG3/o07/8ZpmoW2Hhg4nHMvc+4JEikMuu63U9rY3NreKe9W9vYPDo+qxycdE6ea8TaLZax7ATVcCsXbKFDyXqI5jQLJu8H0buF3n7g2IlYtnCXcj+hYiVAwilZ6HEQUJybMWvNhtebW3RxknXgFqUGB5rD6NRjFLI24QiapMX3PTdDPqEbBJJ9XBqnhCWVTOuZ9SxWNuPGzPPGcXFhlRMJY26eQ5OrvjYxGxsyiwE7mCVe9hfif108xvPUzoZIUuWLLj8JUEozJ4nwyEpozlDNLKNPCZiVsQjVlaEuq2BK81ZPXSeeq7rl178GrNa6LOspwBudwCR7cQAPuoQltYKDgGV7hzTHOi/PufCxHS06xcwp/4Hz+ANtnkPg=</latexit> softmax | {z } u k w N, 1 : w N,L 00 \u0000! h 1(1) \u0000 h 1(1) \u0000! h 2(1) \u0000 h 2(1) \u0000! h L (1) \u0000 h L (1) softmax | {z } u N | {z } u 1 utterances BiLSTM Layer utterance embedding attention weights tanh BiLSTMLayer \u0000! h 1(2) \u0000! h 2(2) \u0000! h N (2) \u0000 h N (2) \u0000 h 2(2) \u0000 h 1(2) \u0000! h 1(1) \u0000 h 1(1) \u0000! h L (1) \u0000 h L (1) BiLSTMLayer BiLSTMLayer attention weights EmbeddingLayer EmbeddingLayer BiLSTM Hidden Layers v k , 2 <latexit sha1_base64=\"OuGlh/uVnUWLTeHXQQWMXIY9w20=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBovgopSkG10W3LgRK9gHtCFMppN26GQSZiaFEvInblwo4tY/cefOT3HSdqGtBwYO59zLPXOChDOlHefLKm1sbm3vlHcre/sHh0f28UlHxakktE1iHstegBXlTNC2ZprTXiIpjgJOu8HkpvC7UyoVi8WjniXUi/BIsJARrI3k2/YgwnochNnUzya1Rp77dtWpO3OgdeIuSbVp391/A0DLtz8Hw5ikERWacKxU33US7WVYakY4zSuDVNEEkwke0b6hAkdUedk8eY4ujDJEYSzNExrN1d8bGY6UmkWBmSxyqlWvEP/z+qkOr72MiSTVVJDFoTDlSMeoqAENmaRE85khmEhmsiIyxhITbcqqmBLc1S+vk06j7jp198GtNmuwQBnO4BwuwYUraMIttKANBKbwBC/wamXWs/VmvS9GS9Zy5xT+wPr4AQu0lVo=</latexit> <latexit sha1_base64=\"qvm+No4aw12KMy0E19kmg50dSkI=\">AAAB+XicbVDLSsNAFL2pr1pfUZdugkVwUUrSjYKbghs3YgX7wDaEyWTSDp1MwsykUEL+xI0LRdz6J+5c+StO2i609cDA4Zx7uWeOnzAqlW1/GaW19Y3NrfJ2ZWd3b//APDzqyDgVmLRxzGLR85EkjHLSVlQx0ksEQZHPSNcfXxd+d0KEpDF/UNOEuBEachpSjJSWPNMcREiN/DCbeNm41shzz6zadXsGa5U4C1Jtmrd331fBY8szPwdBjNOIcIUZkrLv2IlyMyQUxYzklUEqSYLwGA1JX1OOIiLdbJY8t860ElhhLPTjypqpvzcyFEk5jXw9WeSUy14h/uf1UxVeuhnlSaoIx/NDYcosFVtFDVZABcGKTTVBWFCd1cIjJBBWuqyKLsFZ/vIq6TTqjl137p1qswZzlOEETuEcHLiAJtxAC9qAYQJP8AKvRmY8G2/G+3y0ZCx2juEPjI8fVvKWUg==</latexit> <latexit sha1_base64=\"qvm+No4aw12KMy0E19kmg50dSkI=\">AAAB+XicbVDLSsNAFL2pr1pfUZdugkVwUUrSjYKbghs3YgX7wDaEyWTSDp1MwsykUEL+xI0LRdz6J+5c+StO2i609cDA4Zx7uWeOnzAqlW1/GaW19Y3NrfJ2ZWd3b//APDzqyDgVmLRxzGLR85EkjHLSVlQx0ksEQZHPSNcfXxd+d0KEpDF/UNOEuBEachpSjJSWPNMcREiN/DCbeNm41shzz6zadXsGa5U4C1Jtmrd331fBY8szPwdBjNOIcIUZkrLv2IlyMyQUxYzklUEqSYLwGA1JX1OOIiLdbJY8t860ElhhLPTjypqpvzcyFEk5jXw9WeSUy14h/uf1UxVeuhnlSaoIx/NDYcosFVtFDVZABcGKTTVBWFCd1cIjJBBWuqyKLsFZ/vIq6TTqjl137p1qswZzlOEETuEcHLiAJtxAC9qAYQJP8AKvRmY8G2/G+3y0ZCx2juEPjI8fVvKWUg==</latexit> <latexit sha1_base64=\"WAD4Vlp0pIegzMiB8bnwBBkRRZI=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBovgopSkm7osuHFZwT6gDWEynbRDJ5MwMymUkD9x40IRt/6JO//GSZuFth4YOJxzL/fMCRLOlHacb6uys7u3f1A9rB0dn5ye2ecXfRWnktAeiXkshwFWlDNBe5ppToeJpDgKOB0E8/vCHyyoVCwWT3qZUC/CU8FCRrA2km/b4wjrWRBmCz+bN1p57tt1p+msgLaJW5I6lOj69td4EpM0okITjpUauU6ivQxLzQineW2cKppgMsdTOjJU4IgqL1slz9GNUSYojKV5QqOV+nsjw5FSyygwk0VOtekV4n/eKNXhnZcxkaSaCrI+FKYc6RgVNaAJk5RovjQEE8lMVkRmWGKiTVk1U4K7+eVt0m81XafpPrr1TqOsowpXcA234EIbOvAAXegBgQU8wyu8WZn1Yr1bH+vRilXuXMIfWJ8/n06Tjg==</latexit> s k <latexit 
sha1_base64=\"yUKfgTFIknPb8I2zxRf6STJJB2A=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8FLx4rmLbQhrLZTtqlm03Y3Qgl9Dd48aCIV3+QN/+N2zYHbX0w8Hhvhpl5YSq4Nq777ZS2tnd298r7lYPDo+OT6ulZRyeZYuizRCSqF1KNgkv0DTcCe6lCGocCu+H0buF3n1BpnshHM0sxiOlY8ogzaqzk62E+nQ+rNbfuLkE2iVeQGhRoD6tfg1HCshilYYJq3ffc1AQ5VYYzgfPKINOYUjalY+xbKmmMOsiXx87JlVVGJEqULWnIUv09kdNY61kc2s6Ymole9xbif14/M9FtkHOZZgYlWy2KMkFMQhafkxFXyIyYWUKZ4vZWwiZUUWZsPhUbgrf+8ibp3NQ9t+49NGqtRhFHGS7gEq7Bgya04B7a4AMDDs/wCm+OdF6cd+dj1Vpyiplz+APn8wccmY7T</latexit> <latexit sha1_base64=\"yUKfgTFIknPb8I2zxRf6STJJB2A=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8FLx4rmLbQhrLZTtqlm03Y3Qgl9Dd48aCIV3+QN/+N2zYHbX0w8Hhvhpl5YSq4Nq777ZS2tnd298r7lYPDo+OT6ulZRyeZYuizRCSqF1KNgkv0DTcCe6lCGocCu+H0buF3n1BpnshHM0sxiOlY8ogzaqzk62E+nQ+rNbfuLkE2iVeQGhRoD6tfg1HCshilYYJq3ffc1AQ5VYYzgfPKINOYUjalY+xbKmmMOsiXx87JlVVGJEqULWnIUv09kdNY61kc2s6Ymole9xbif14/M9FtkHOZZgYlWy2KMkFMQhafkxFXyIyYWUKZ4vZWwiZUUWZsPhUbgrf+8ibp3NQ9t+49NGqtRhFHGS7gEq7Bgya04B7a4AMDDs/wCm+OdF6cd+dj1Vpyiplz+APn8wccmY7T</latexit> <latexit sha1_base64=\"yUKfgTFIknPb8I2zxRf6STJJB2A=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8FLx4rmLbQhrLZTtqlm03Y3Qgl9Dd48aCIV3+QN/+N2zYHbX0w8Hhvhpl5YSq4Nq777ZS2tnd298r7lYPDo+OT6ulZRyeZYuizRCSqF1KNgkv0DTcCe6lCGocCu+H0buF3n1BpnshHM0sxiOlY8ogzaqzk62E+nQ+rNbfuLkE2iVeQGhRoD6tfg1HCshilYYJq3ffc1AQ5VYYzgfPKINOYUjalY+xbKmmMOsiXx87JlVVGJEqULWnIUv09kdNY61kc2s6Ymole9xbif14/M9FtkHOZZgYlWy2KMkFMQhafkxFXyIyYWUKZ4vZWwiZUUWZsPhUbgrf+8ibp3NQ9t+49NGqtRhFHGS7gEq7Bgya04B7a4AMDDs/wCm+OdF6cd+dj1Vpyiplz+APn8wccmY7T</latexit> <latexit sha1_base64=\"yUKfgTFIknPb8I2zxRf6STJJB2A=\">AAAB7HicbVBNS8NAEJ3Ur1q/qh69LBbBU0mkUI8FLx4rmLbQhrLZTtqlm03Y3Qgl9Dd48aCIV3+QN/+N2zYHbX0w8Hhvhpl5YSq4Nq777ZS2tnd298r7lYPDo+OT6ulZRyeZYuizRCSqF1KNgkv0DTcCe6lCGocCu+H0buF3n1BpnshHM0sxiOlY8ogzaqzk62E+nQ+rNbfuLkE2iVeQGhRoD6tfg1HCshilYYJq3ffc1AQ5VYYzgfPKINOYUjalY+xbKmmMOsiXx87JlVVGJEqULWnIUv09kdNY61kc2s6Ymole9xbif14/M9FtkHOZZgYlWy2KMkFMQhafkxFXyIyYWUKZ4vZWwiZUUWZsPhUbgrf+8ibp3NQ9t+49NGqtRhFHGS7gEq7Bgya04B7a4AMDDs/wCm+OdF6cd+dj1Vpyiplz+APn8wccmY7T</latexit> W ( 1 ) <latexit sha1_base64=\"k+mab+VoRgDvO2Ue1m/xz+mrK7Y=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBotQNyWRQl0W3LisYB/QxjKZTtqhk0mYmRRKyJ+4caGIW//EnX/jpM1CWw8MHM65l3vm+DFnSjvOt1Xa2t7Z3SvvVw4Oj45P7NOzrooSSWiHRDySfR8rypmgHc00p/1YUhz6nPb82V3u9+ZUKhaJR72IqRfiiWABI1gbaWTbwxDrqR+kvae05l5n2ciuOnVnCbRJ3IJUoUB7ZH8NxxFJQio04VipgevE2kux1IxwmlWGiaIxJjM8oQNDBQ6p8tJl8gxdGWWMgkiaJzRaqr83UhwqtQh9M5nnVOteLv7nDRId3HopE3GiqSCrQ0HCkY5QXgMaM0mJ5gtDMJHMZEVkiiUm2pRVMSW461/eJN2buuvU3YdGtdUo6ijDBVxCDVxoQgvuoQ0dIDCHZ3iFNyu1Xqx362M1WrKKnXP4A+vzBwSckzI=</latexit> <latexit sha1_base64=\"k+mab+VoRgDvO2Ue1m/xz+mrK7Y=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBotQNyWRQl0W3LisYB/QxjKZTtqhk0mYmRRKyJ+4caGIW//EnX/jpM1CWw8MHM65l3vm+DFnSjvOt1Xa2t7Z3SvvVw4Oj45P7NOzrooSSWiHRDySfR8rypmgHc00p/1YUhz6nPb82V3u9+ZUKhaJR72IqRfiiWABI1gbaWTbwxDrqR+kvae05l5n2ciuOnVnCbRJ3IJUoUB7ZH8NxxFJQio04VipgevE2kux1IxwmlWGiaIxJjM8oQNDBQ6p8tJl8gxdGWWMgkiaJzRaqr83UhwqtQh9M5nnVOteLv7nDRId3HopE3GiqSCrQ0HCkY5QXgMaM0mJ5gtDMJHMZEVkiiUm2pRVMSW461/eJN2buuvU3YdGtdUo6ijDBVxCDVxoQgvuoQ0dIDCHZ3iFNyu1Xqx362M1WrKKnXP4A+vzBwSckzI=</latexit> <latexit 
sha1_base64=\"k+mab+VoRgDvO2Ue1m/xz+mrK7Y=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBotQNyWRQl0W3LisYB/QxjKZTtqhk0mYmRRKyJ+4caGIW//EnX/jpM1CWw8MHM65l3vm+DFnSjvOt1Xa2t7Z3SvvVw4Oj45P7NOzrooSSWiHRDySfR8rypmgHc00p/1YUhz6nPb82V3u9+ZUKhaJR72IqRfiiWABI1gbaWTbwxDrqR+kvae05l5n2ciuOnVnCbRJ3IJUoUB7ZH8NxxFJQio04VipgevE2kux1IxwmlWGiaIxJjM8oQNDBQ6p8tJl8gxdGWWMgkiaJzRaqr83UhwqtQh9M5nnVOteLv7nDRId3HopE3GiqSCrQ0HCkY5QXgMaM0mJ5gtDMJHMZEVkiiUm2pRVMSW461/eJN2buuvU3YdGtdUo6ijDBVxCDVxoQgvuoQ0dIDCHZ3iFNyu1Xqx362M1WrKKnXP4A+vzBwSckzI=</latexit> <latexit sha1_base64=\"k+mab+VoRgDvO2Ue1m/xz+mrK7Y=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBotQNyWRQl0W3LisYB/QxjKZTtqhk0mYmRRKyJ+4caGIW//EnX/jpM1CWw8MHM65l3vm+DFnSjvOt1Xa2t7Z3SvvVw4Oj45P7NOzrooSSWiHRDySfR8rypmgHc00p/1YUhz6nPb82V3u9+ZUKhaJR72IqRfiiWABI1gbaWTbwxDrqR+kvae05l5n2ciuOnVnCbRJ3IJUoUB7ZH8NxxFJQio04VipgevE2kux1IxwmlWGiaIxJjM8oQNDBQ6p8tJl8gxdGWWMgkiaJzRaqr83UhwqtQh9M5nnVOteLv7nDRId3HopE3GiqSCrQ0HCkY5QXgMaM0mJ5gtDMJHMZEVkiiUm2pRVMSW461/eJN2buuvU3YdGtdUo6ijDBVxCDVxoQgvuoQ0dIDCHZ3iFNyu1Xqx362M1WrKKnXP4A+vzBwSckzI=</latexit> W ( 2 ) <latexit sha1_base64=\"thiZP403hYYWRm+zF+IHR7/Hn/U=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBotQNyUpBV0W3LisYB/QxjKZTtqhk0mYmRRKyJ+4caGIW//EnX/jpM1CWw8MHM65l3vm+DFnSjvOt1Xa2t7Z3SvvVw4Oj45P7NOzrooSSWiHRDySfR8rypmgHc00p/1YUhz6nPb82V3u9+ZUKhaJR72IqRfiiWABI1gbaWTbwxDrqR+kvae01rjOspFdderOEmiTuAWpQoH2yP4ajiOShFRowrFSA9eJtZdiqRnhNKsME0VjTGZ4QgeGChxS5aXL5Bm6MsoYBZE0T2i0VH9vpDhUahH6ZjLPqda9XPzPGyQ6uPVSJuJEU0FWh4KEIx2hvAY0ZpISzReGYCKZyYrIFEtMtCmrYkpw17+8SbqNuuvU3YdmtdUs6ijDBVxCDVy4gRbcQxs6QGAOz/AKb1ZqvVjv1sdqtGQVO+fwB9bnDwYjkzM=</latexit> <latexit sha1_base64=\"thiZP403hYYWRm+zF+IHR7/Hn/U=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBotQNyUpBV0W3LisYB/QxjKZTtqhk0mYmRRKyJ+4caGIW//EnX/jpM1CWw8MHM65l3vm+DFnSjvOt1Xa2t7Z3SvvVw4Oj45P7NOzrooSSWiHRDySfR8rypmgHc00p/1YUhz6nPb82V3u9+ZUKhaJR72IqRfiiWABI1gbaWTbwxDrqR+kvae01rjOspFdderOEmiTuAWpQoH2yP4ajiOShFRowrFSA9eJtZdiqRnhNKsME0VjTGZ4QgeGChxS5aXL5Bm6MsoYBZE0T2i0VH9vpDhUahH6ZjLPqda9XPzPGyQ6uPVSJuJEU0FWh4KEIx2hvAY0ZpISzReGYCKZyYrIFEtMtCmrYkpw17+8SbqNuuvU3YdmtdUs6ijDBVxCDVy4gRbcQxs6QGAOz/AKb1ZqvVjv1sdqtGQVO+fwB9bnDwYjkzM=</latexit> <latexit sha1_base64=\"thiZP403hYYWRm+zF+IHR7/Hn/U=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBotQNyUpBV0W3LisYB/QxjKZTtqhk0mYmRRKyJ+4caGIW//EnX/jpM1CWw8MHM65l3vm+DFnSjvOt1Xa2t7Z3SvvVw4Oj45P7NOzrooSSWiHRDySfR8rypmgHc00p/1YUhz6nPb82V3u9+ZUKhaJR72IqRfiiWABI1gbaWTbwxDrqR+kvae01rjOspFdderOEmiTuAWpQoH2yP4ajiOShFRowrFSA9eJtZdiqRnhNKsME0VjTGZ4QgeGChxS5aXL5Bm6MsoYBZE0T2i0VH9vpDhUahH6ZjLPqda9XPzPGyQ6uPVSJuJEU0FWh4KEIx2hvAY0ZpISzReGYCKZyYrIFEtMtCmrYkpw17+8SbqNuuvU3YdmtdUs6ijDBVxCDVy4gRbcQxs6QGAOz/AKb1ZqvVjv1sdqtGQVO+fwB9bnDwYjkzM=</latexit> <latexit sha1_base64=\"thiZP403hYYWRm+zF+IHR7/Hn/U=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBotQNyUpBV0W3LisYB/QxjKZTtqhk0mYmRRKyJ+4caGIW//EnX/jpM1CWw8MHM65l3vm+DFnSjvOt1Xa2t7Z3SvvVw4Oj45P7NOzrooSSWiHRDySfR8rypmgHc00p/1YUhz6nPb82V3u9+ZUKhaJR72IqRfiiWABI1gbaWTbwxDrqR+kvae01rjOspFdderOEmiTuAWpQoH2yP4ajiOShFRowrFSA9eJtZdiqRnhNKsME0VjTGZ4QgeGChxS5aXL5Bm6MsoYBZE0T2i0VH9vpDhUahH6ZjLPqda9XPzPGyQ6uPVSJuJEU0FWh4KEIx2hvAY0ZpISzReGYCKZyYrIFEtMtCmrYkpw17+8SbqNuuvU3YdmtdUs6ijDBVxCDVy4gRbcQxs6QGAOz/AKb1ZqvVjv1sdqtGQVO+fwB9bnDwYjkzM=</latexit> W ( f ) <latexit 
sha1_base64=\"q/JWAI7hd1WJa5AVDnfLVsFD7V4=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBotQNyWRQl0W3LisYB/QxjKZTtqhk0mYmRRKyJ+4caGIW//EnX/jpM1CWw8MHM65l3vm+DFnSjvOt1Xa2t7Z3SvvVw4Oj45P7NOzrooSSWiHRDySfR8rypmgHc00p/1YUhz6nPb82V3u9+ZUKhaJR72IqRfiiWABI1gbaWTbwxDrqR+kvae0Flxn2ciuOnVnCbRJ3IJUoUB7ZH8NxxFJQio04VipgevE2kux1IxwmlWGiaIxJjM8oQNDBQ6p8tJl8gxdGWWMgkiaJzRaqr83UhwqtQh9M5nnVOteLv7nDRId3HopE3GiqSCrQ0HCkY5QXgMaM0mJ5gtDMJHMZEVkiiUm2pRVMSW461/eJN2buuvU3YdGtdUo6ijDBVxCDVxoQgvuoQ0dIDCHZ3iFNyu1Xqx362M1WrKKnXP4A+vzB1WPk2c=</latexit> <latexit sha1_base64=\"q/JWAI7hd1WJa5AVDnfLVsFD7V4=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBotQNyWRQl0W3LisYB/QxjKZTtqhk0mYmRRKyJ+4caGIW//EnX/jpM1CWw8MHM65l3vm+DFnSjvOt1Xa2t7Z3SvvVw4Oj45P7NOzrooSSWiHRDySfR8rypmgHc00p/1YUhz6nPb82V3u9+ZUKhaJR72IqRfiiWABI1gbaWTbwxDrqR+kvae0Flxn2ciuOnVnCbRJ3IJUoUB7ZH8NxxFJQio04VipgevE2kux1IxwmlWGiaIxJjM8oQNDBQ6p8tJl8gxdGWWMgkiaJzRaqr83UhwqtQh9M5nnVOteLv7nDRId3HopE3GiqSCrQ0HCkY5QXgMaM0mJ5gtDMJHMZEVkiiUm2pRVMSW461/eJN2buuvU3YdGtdUo6ijDBVxCDVxoQgvuoQ0dIDCHZ3iFNyu1Xqx362M1WrKKnXP4A+vzB1WPk2c=</latexit> <latexit sha1_base64=\"q/JWAI7hd1WJa5AVDnfLVsFD7V4=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBotQNyWRQl0W3LisYB/QxjKZTtqhk0mYmRRKyJ+4caGIW//EnX/jpM1CWw8MHM65l3vm+DFnSjvOt1Xa2t7Z3SvvVw4Oj45P7NOzrooSSWiHRDySfR8rypmgHc00p/1YUhz6nPb82V3u9+ZUKhaJR72IqRfiiWABI1gbaWTbwxDrqR+kvae0Flxn2ciuOnVnCbRJ3IJUoUB7ZH8NxxFJQio04VipgevE2kux1IxwmlWGiaIxJjM8oQNDBQ6p8tJl8gxdGWWMgkiaJzRaqr83UhwqtQh9M5nnVOteLv7nDRId3HopE3GiqSCrQ0HCkY5QXgMaM0mJ5gtDMJHMZEVkiiUm2pRVMSW461/eJN2buuvU3YdGtdUo6ijDBVxCDVxoQgvuoQ0dIDCHZ3iFNyu1Xqx362M1WrKKnXP4A+vzB1WPk2c=</latexit> <latexit sha1_base64=\"q/JWAI7hd1WJa5AVDnfLVsFD7V4=\">AAAB+XicbVDLSsNAFL2pr1pfUZduBotQNyWRQl0W3LisYB/QxjKZTtqhk0mYmRRKyJ+4caGIW//EnX/jpM1CWw8MHM65l3vm+DFnSjvOt1Xa2t7Z3SvvVw4Oj45P7NOzrooSSWiHRDySfR8rypmgHc00p/1YUhz6nPb82V3u9+ZUKhaJR72IqRfiiWABI1gbaWTbwxDrqR+kvae0Flxn2ciuOnVnCbRJ3IJUoUB7ZH8NxxFJQio04VipgevE2kux1IxwmlWGiaIxJjM8oQNDBQ6p8tJl8gxdGWWMgkiaJzRaqr83UhwqtQh9M5nnVOteLv7nDRId3HopE3GiqSCrQ0HCkY5QXgMaM0mJ5gtDMJHMZEVkiiUm2pRVMSW461/eJN2buuvU3YdGtdUo6ijDBVxCDVxoQgvuoQ0dIDCHZ3iFNyu1Xqx362M1WrKKnXP4A+vzB1WPk2c=</latexit> U tt e r a n ce E n c o d e r D i a l og E n c o d e r a = [ a 1 ,...,a L ] <latexit sha1_base64=\"WDFg6st08nGebQNXZDOFuTRVAz4=\">AAACB3icbZDLSsNAFIZPvNZ6i5edIINFcFFCIoJuhIIbFy4q2AukIU6mk3bo5MLMRCghOze+ihsXirj1Fdz5Nk4vC239YeDjP+cw5/xByplUtv1tLCwuLa+sltbK6xubW9vmzm5TJpkgtEESnoh2gCXlLKYNxRSn7VRQHAWctoLB1ajeeqBCsiS+U8OUehHuxSxkBCtt+eZhJ8KqH4Q5LtAlcrGfO0XVsqyqppvC882KbdljoXlwplCp7YfhPQDUffOr001IFtFYEY6ldB07VV6OhWKE06LcySRNMRngHnU1xjii0svHdxToWDtdFCZCv1ihsft7IseRlMMo0J2jreVsbWT+V3MzFV54OYvTTNGYTD4KM45UgkahoC4TlCg+1ICJYHpXRPpYYKJ0dGUdgjN78jw0Ty3Htpxbp1I7g4lKcABHcAIOnEMNrqEODSDwCM/wCm/Gk/FivBsfk9YFYzqzB39kfP4A4AGZcQ==</latexit> <latexit sha1_base64=\"Oemvv3EJlsqQ0sZwivlFuGYf2lA=\">AAACB3icbZDLSsNAFIYn9VbrLV52ggwWwUUJGRF0IxTcuHBRwV4gDWEynbRDJxdmJkIJ2bnxVdy4UMStr+DOt3GSdqGtPwx8/Occ5pzfTziTyra/jcrS8srqWnW9trG5tb1j7u51ZJwKQtsk5rHo+VhSziLaVkxx2ksExaHPadcfXxf17gMVksXRvZok1A3xMGIBI1hpyzOP+iFWIz/IcA6voIO9DOUNy7Iamm5z1zPrtmWXgouAZlBvHgSlWp751R/EJA1ppAjHUjrITpSbYaEY4TSv9VNJE0zGeEgdjREOqXSz8o4cnmhnAINY6BcpWLq/JzIcSjkJfd1ZbC3na4X5X81JVXDpZixKUkUjMv0oSDlUMSxCgQMmKFF8ogETwfSukIywwETp6Go6BDR/8iJ0zixkW+gO1ZvnYKoqOATH4BQgcAGa4Aa0QBsQ8AiewSt4M56MF+Pd+Ji2VozZzD74I+PzB4D7mqk=</latexit> <latexit 
sha1_base64=\"Oemvv3EJlsqQ0sZwivlFuGYf2lA=\">AAACB3icbZDLSsNAFIYn9VbrLV52ggwWwUUJGRF0IxTcuHBRwV4gDWEynbRDJxdmJkIJ2bnxVdy4UMStr+DOt3GSdqGtPwx8/Occ5pzfTziTyra/jcrS8srqWnW9trG5tb1j7u51ZJwKQtsk5rHo+VhSziLaVkxx2ksExaHPadcfXxf17gMVksXRvZok1A3xMGIBI1hpyzOP+iFWIz/IcA6voIO9DOUNy7Iamm5z1zPrtmWXgouAZlBvHgSlWp751R/EJA1ppAjHUjrITpSbYaEY4TSv9VNJE0zGeEgdjREOqXSz8o4cnmhnAINY6BcpWLq/JzIcSjkJfd1ZbC3na4X5X81JVXDpZixKUkUjMv0oSDlUMSxCgQMmKFF8ogETwfSukIywwETp6Go6BDR/8iJ0zixkW+gO1ZvnYKoqOATH4BQgcAGa4Aa0QBsQ8AiewSt4M56MF+Pd+Ji2VozZzD74I+PzB4D7mqk=</latexit> <latexit sha1_base64=\"q5ty6+uBkHIUIpmqXQeFlC2nJCg=\">AAACB3icbZDLSsNAFIYnXmu9RV0KMlgEFyVkRNCNUHDjwkUFe4E2hMl00g6dTMLMRCghOze+ihsXirj1Fdz5Nk7aLLT1h4GP/5zDnPMHCWdKu+63tbS8srq2Xtmobm5t7+zae/ttFaeS0BaJeSy7AVaUM0FbmmlOu4mkOAo47QTj66LeeaBSsVjc60lCvQgPBQsZwdpYvn3Uj7AeBWGGc3gFe9jPUF53HKdu6Db3fLvmOu5UcBFQCTVQqunbX/1BTNKICk04VqqH3ER7GZaaEU7zaj9VNMFkjIe0Z1DgiCovm96RwxPjDGAYS/OEhlP390SGI6UmUWA6i63VfK0w/6v1Uh1eehkTSaqpILOPwpRDHcMiFDhgkhLNJwYwkczsCskIS0y0ia5qQkDzJy9C+8xBroPuUK1xXsZRAYfgGJwCBC5AA9yAJmgBAh7BM3gFb9aT9WK9Wx+z1iWrnDkAf2R9/gDZdZfx</latexit> Figure 1: Model Architecture put to a classifier (typically Nave Bayes or SVM ).", "Extracted features include: Bag of Words (BoW), TF-IDF (Sparck Jones, 1972; Schutze et al., 2008), n-grams, and word co-occurrences (Hazen, 2011; Myers et al., 2000).", "Some approaches (in addition to word co-occurrences features) incorporate background world knowledge using Wikipedia (Gupta and Ratinov, 2007).", "In our work, we do not explicitly extract the features but learn these during training.", "Moreover, unlike previous approaches, we explicitly model the dependencies between utterances via self attention mechanism and hierarchical structure.", "Topic spotting has been explored in depth in the speech processing community (see for example, Wright et al. (1996); Kuhn et al. (1997); Noth et al. (1997); Theunissen (2002)).", "Researchers in this community have attempted to predict the topic directly from the audio signals using phoneme based features.", "However, the performance of word based models supersedes those of audio models (Hazen et al., 2007).", "Recently, there has been lot of work in deep learning community for text classification (Kalch-brenner et al., 2014; Zhang et al., 2015; Lai et al., 2015; Lin et al., 2015; Tang et al., 2015).", "These deep learning models use either RNN-LSTM based neural networks (Hochreiter and Schmidhuber, 1997) or CNN based neural networks (Kim, 2014) for learning representation of words/sentences.", "We follow similar approach for topic spotting.", "Our model is related to the Hierarchical Attention Network (HN-ATT) model proposed by Yang et al. 
(2016) for document classification.", "HN-ATT models the document hierarchically by composing words (with weights determined by first level of attention mechanism) to get sentence representations and then combines the sentence representations with help of second level attention to get document representation which is then used for classification.", "The aim of this paper is not to improve text classification but to improve topic spotting.", "Topic spotting and text classification differ in various aspects.", "We are among the first to show the use of hierarchical self attention (HN-SA) model for topic spotting.", "It is natural to consider applying text classification techniques for topic spotting.", "However, as we empirically show in this paper, text classification techniques do not perform well in this setting.", "Moreover, for the dialog corpus simple BoW approaches perform better than the more recently proposed HN-ATT model (Yang et al., 2016).", "We propose a hierarchical model with self attention (HN-SA) for topic spotting.", "We are given a topic label for each dialog and we want to learn a model mapping from the space of dialogues to the space of topic labels.", "We learn a prediction model by minimizing the Negative Log Likelihood (NLL) of the data.", "An utterance encoder processes each utterance in the dialog and outputs the corresponding utterance representation.", "A dialog encoder processes the utterance representations to give a compact vector representation for the dialog which is used to predict the topic of the dialog.", "Utterance Encoder: Each utterance in the dialog is processed sequentially using a single layer Bidirectional Long Short Term Memory (BiLSTM) (Dyer et al., 2015) network and a self-attention mechanism (Vaswani et al., 2017) to get the utterance representation.", "In particular, given an utterance with one-hot encodings for the tokens, u_k = {w_{k,1}, w_{k,2}, ..., w_{k,L}}, each token is mapped to a vector v_{k,i} = E w_{k,i}; i = 1, 2, ...L using pre-trained embeddings (matrix E).", "The utterance representation (s_k = a^T H^(1)) is the weighted sum of the concatenated forward and backward hidden states at each step of the BiLSTM (H^(1) = [h^(1)_1, ..., h^(1)_L]^T where h^(1)_i = [→h^(1)_i : ←h^(1)_i] = BiLSTM(v_{k,i})).", "The weights of the combination (a = softmax(h^(2)_a)) are determined using the self-attention mechanism proposed by Vaswani et al. 
(2017) by measuring the similarity between the concatenated hidden states (h^(2)_a = W^(2)_a h^(1)_a + b^(2)_a and h^(1)_a = tanh(W^(1)_a H^(1) + b^(1)_a)) at each step in the utterance sequence.", "Self-attention computes the similarity of a token in the context of an utterance and thus boosts the contribution of some keywords to the classifier.", "It also mitigates the need for a second layer of attention at a dialog level, reducing the number of parameters, reducing the confusion of the classifier by not trying to reweigh individual utterances and reducing the dependence on having all utterances (full future context) for an accurate prediction.", "A simple LSTM based model (HN) and HN-ATT perform worse than the model using self attention (Section 5), indicating the crucial role played by the self-attention mechanism.", "Dialog Encoder: Utterance embeddings (representations) are sequentially encoded by a second single layer BiLSTM to get the dialog representation (h^(2)_k = [→h^(2)_k : ←h^(2)_k] = BiLSTM(s_k); k = 1, 2, ...N).", "The bidirectional concatenated hidden state corresponding to the last utterance (i.e. the last step of the BiLSTM) is used for making a prediction via a linear layer followed by softmax activation (p(T | D) = softmax(h_D) where h_D = W_f h^(2)_N).", "Table 1: Corpus statistics for both versions of SWBD. # Dialogues (SWBD / SWBD2): Train 1024 / 877, Dev 112 / 49, Test 19 / 98; # Topics (SWBD / SWBD2): Train 66 / 42, Dev 48 / 33, Test 12 / 42; Avg. # Utterances (SWBD / SWBD2): Train 192.27 / 194.33, Dev 180.52 / 177.02, Test 237.58 / 201.97.", "4 Experimental Setup: As in previous work (Section 2), we use the Switchboard (SWBD) corpus (Godfrey et al., 1992) for training our model.", "SWBD is a corpus of human-human conversations, created by recording (and later transcribing) telephonic conversations between two participants who were primed with a topic.", "Table 1 gives the corpus statistics.", "Topics in SWBD range over a variety of domains, for example, politics, health, sports, entertainment, hobbies, etc., making the task of topic spotting challenging.", "Dialogues in the test set of the original SWBD cover a limited number of topics (12 vs 66).", "The test set is not ideal for evaluating a topic spotting system.", "We address this shortcoming by creating a new split and we refer to this version of the corpus as SWBD2.", "The new split provides opportunity for more rigorous evaluation of a topic spotting system.", "SWBD2 was created by removing infrequent topics ( < 10 dialogues) from the corpus and then randomly moving dialogues between the train/development set and the test set, in order to have instances of each topic in the test set.", "The majority class baseline in SWBD2 is around 5%.", "In the transcribed SWBD corpus some punctuation symbols such as #, ?, have special meanings and non-verbal sounds have been mapped to special symbols e.g. <Laughter>.", "To preserve the meanings of special symbols we performed minimal preprocessing.", "Dialog corpora are different from text classification corpora (e.g. product reviews).", "If we roughly equate a dialog to a document and an utterance to a sentence, dialogs are very long documents with short sentences.", "Moreover, the vocabulary distribution in a dialog corpus is fundamentally different, e.g. 
presence of back-channel words like 'uhm' and 'ah'.", "Model Hyper-parameters: We use GloVe embeddings (Pennington et al., 2014) with a dimensionality of 300.", "The embeddings are updated during training.", "Each LSTM cell in the utterance and dialog encoders uses a hidden state of dimension 256.", "The weight matrices in the attention network have a dimension of 128.", "The hyper-parameters were found by experimenting with the development set.", "Table 2 (fragment) - Models, SWBD, SWBD2: BoW + Logistic 78.95, 87.76; BoW + SVM 73.68, 90.82; Bigram + SVM 52.63, 79.59; BoW + TF-IDF + Logistic 52.63, 81.63; nGram + Logistic 52.63, 78.57; nGram + TF-IDF + Logistic 57.89, 87.76; Bag of Means + Logistic 78.95, 87.76; Avg. ...", "We trained the model by minimizing the cross-entropy loss using the Adam optimizer (Kingma and Ba, 2014) with an initial learning rate of 0.001.", "The learning rate was reduced by half when development set accuracy did not change over successive epochs.", "The model took around 30 epochs to train.", "We compare the performance of our model (Table 2) with traditional Bag of Words (BoW), TF-IDF, and n-gram feature based classifiers.", "We also compare against averaged Skip-Gram (Mikolov et al., 2013), Doc2Vec (Le and Mikolov, 2014), CNN (Kim, 2014), Hierarchical Attention (HN-ATT) (Yang et al., 2016) and hierarchical network (HN) models.", "HN is similar to our model HN-SA but without any self attention.", "Analysis: As is evident from the experiments on both versions of SWBD, our model (HN-SA) outperforms traditional feature based topic spotting models and deep learning based document classification models.", "It is interesting to see that simple BoW and n-gram baselines are quite competitive and outperform some of the deep learning based document classification models.", "A similar observation has also been reported by Mesnil et al. (2014) for the task of sentiment analysis.", "The task of topic spotting is arguably more challenging than document classification.", "In the topic spotting task, the number of output classes (66/42 classes) is much larger than in document classification (5/6 classes), which is done mainly on texts from customer reviews.", "Dialogues in SWBD have on average 200 utterances and are much longer texts than customer reviews.", "Additionally, the number of dialogues available for training the model is significantly smaller than for customer reviews.", "We further investigated the performance on SWBD2 by examining the confusion matrix of the model.", "Figure 2 shows the heatmap of the normalized confusion matrix of the model on SWBD2.", "For most of the classes the classifier is able to predict accurately.", "However, the model gets confused between classes which are semantically close (w.r.t. terms used) to each other, for example, the model gets confused between pragmatically similar topics e.g. 
HOBBIES vs GARDENING, MOVIES vs TV PROGRAMS, RIGHT TO PRIVACY vs DRUG TESTING.", "Online Setting: In an online conversational system, a topic spotting model is required to predict the topic accurately and as soon as possible during the dialog.", "We investigated the relationship between dialog length (in terms of number of utterances) and accuracy.", "This would give us an idea about how many utterances are required to reach a desirable level of accuracy.", "For this experiment, we varied the length of the dialogues from the test set that was available to the model for making prediction.", "We created sub-dialogues of length starting with 1 / 32 of the dialog length and increasing it in multiples of 2, up to the full dialog.", "Figure 3 shows both the absolute accuracy and the accuracy relative to that on the full dialog.", "With just a few (3.125%) initial utterances available, the model is already 72% confident about the topic.", "This may be partly due to the fact that in a discussion, the first few utterances explicitly talk about the topic.", "However, as we have seen, since SWBD covers many different topics which are semantically close to each other but are assigned distinct classes, it is equally challenging to predict the topic with the same model.", "By the time the system has processed half the dialog in SWBD2 it is already within 99% accuracy of the full system.", "The experiment shows the possibility of using the model in an online setting where the model predicts the topic with high confidence as the conversation progresses.", "In this paper we presented a hierarchical model with self attention for topic spotting.", "The model outperforms the conventional topic spotting techniques as well as deep learning techniques for text classification.", "We empirically show that the proposed model can also be used in an online setting.", "We also introduced a new version of SWBD corpus: SWBD2.", "We hope that it will serve as the new standard for evaluating topic spotting models.", "Moving forward, we would like to explore a more realistic multi-modal topic spotting system.", "Such a system should fuse two modalities: audio and transcribed text to make topic predictions.", "We would like to thank anonymous reviewers for their insightful comments.", "Mubbasir Kapadia has been funded in part by NSF IIS-1703883, NSF S&AS-1723869, and DARPA SocialSim-W911NF-17-C-0098." ]
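As a concrete companion to the model equations in Section 3 above, the following PyTorch sketch shows one way to implement the HN-SA architecture (utterance-level BiLSTM with additive self-attention, dialog-level BiLSTM, linear-plus-softmax topic classifier). It is a minimal re-implementation of our own, not the authors' released code: class and variable names are ours, batching, padding and masking are omitted, and the dimensions follow the hyper-parameters reported above (300-d embeddings, 256-d LSTM states, 128-d attention).

    import torch
    import torch.nn as nn

    class HNSA(nn.Module):
        """Hierarchical topic-spotting model with utterance-level self-attention (sketch)."""
        def __init__(self, vocab_size, n_topics, emb_dim=300, hid=256, att=128):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)       # initialised from GloVe in practice
            self.utt_lstm = nn.LSTM(emb_dim, hid, bidirectional=True, batch_first=True)
            self.att_w1 = nn.Linear(2 * hid, att)              # W_a^(1), b_a^(1)
            self.att_w2 = nn.Linear(att, 1)                    # W_a^(2), b_a^(2)
            self.dlg_lstm = nn.LSTM(2 * hid, hid, bidirectional=True, batch_first=True)
            self.out = nn.Linear(2 * hid, n_topics)            # W_f

        def encode_utterances(self, tokens):
            # tokens: (n_utterances, max_len) token ids for one dialog
            H, _ = self.utt_lstm(self.emb(tokens))             # (U, L, 2*hid)
            scores = self.att_w2(torch.tanh(self.att_w1(H)))   # (U, L, 1)
            a = torch.softmax(scores, dim=1)                   # attention over tokens
            return (a * H).sum(dim=1)                          # s_k = a^T H, shape (U, 2*hid)

        def forward(self, tokens):
            s = self.encode_utterances(tokens).unsqueeze(0)    # (1, U, 2*hid)
            h, _ = self.dlg_lstm(s)                            # (1, U, 2*hid)
            return self.out(h[:, -1])                          # logits over topics

    # Usage sketch: logits = HNSA(vocab_size=30000, n_topics=42)(dialog_token_ids),
    # trained with nn.CrossEntropyLoss and Adam (lr=0.001), as in the experimental setup above.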
[ "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "objective", "result", "other", "other", "abstain", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "other", "method", "abstain", "other", "objective", "other", "objective", "other", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "objective", "abstain", "other", "other" ]
[ "In the principles-and-parameters framework, the structural features of languages depend on parameters that may be toggled on or off, with a single parameter often dictating the status of multiple features.", "The implied covariance between features inspires our probabilisation of this line of linguistic inquiry we develop a generative model of language based on exponential-family matrix factorisation.", "By modelling all languages and features within the same architecture, we show how structural similarities between languages can be exploited to predict typological features with near-perfect accuracy, outperforming several baselines on the task of predicting held-out features.", "Furthermore, we show that language embeddings pre-trained on monolingual text allow for generalisation to unobserved languages.", "This finding has clear practical and also theoretical implications: the results con-firm what linguists have hypothesised, i.e. that there are significant correlations between typological features and languages.", "Linguistic typologists dissect and analyse languages in terms of their structural properties (Croft, 2002).", "For instance, consider the phonological property of word-final obstruent decoding: German devoices word-final obstruents ( Zug is pronounced /zuk/), whereas English does not ( dog is pronounced /d 6 g/).", "In the tradition of generative linguistics, one line of typological analysis is the principles-and-parameters framework (Chomsky, 1981), which posits the existence of a set of universal parameters, switches as it were, that languages toggle.", "One arrives at a kind of factorial typology, to borrow terminology from optimality theory (Prince and Smolensky, 2008), through different settings of the parameters.", "Within the principle-and-parameters research program, then, the goal is Figure 1: Correlations between selected typological parameters.", "It is not enough, however, to simply write down the set of parameters available to language.", "Indeed, one of the most interesting facets of typology is that different parameters are correlated.", "To illustrate this point, we show a heatmap in Figure 1 that shows the correlation between the values of selected parameters taken from a typological knowledge base (KB).", "Notice how head-final word order, for example, highly correlates with strong suffixation.", "The primary contribution of this work is a probabilisation of typology inspired by the principles-and-parameters framework.", "We assume a given set of typological parameters and develop a generative model of a language's parameters, casting the problem as a form of exponential-family matrix factorisation.", "We observe a binary matrix that encodes the settings of each parameter for each language.", "For example, the Manchu head-final entry of this matrix would be set to 1, because Manchu is a head-final language.", "The goal of our model is to explain each entry of matrix as arising through the dot product of a language embedding and a parameter embedding passed through a sigmoid.", "We test our model on The World Atlas of Language Structures (WALS), the largest available knowledge base of typological parameters at the lexical, phonological, syntactic and semantic level.", "Our contributions are:", "(i) We develop a probabilisation of typology inspired by the principles-and-parameters framework.", "(ii) We introduce the novel task of typological collaborative filtering, where we observe some of a language's parameters, but hold some out.", "At evaluation time, 
we predict the held-out parameters using the generative model.", "(iii) We develop a semi-supervised extension, in which we incorporate language embeddings output by a neural language model, thus improving performance with unlabelled data.", "Indeed, when we partially observe some of the typological parameters of a language, we achieve near-perfect ( 97% ) accuracy on the prediction of held-out parameters.", "(iv) We perform an extensive qualitative and quantitative analysis of our method.", "What we will present in this paper is a generative model that corresponds to a generative tradition of research in linguistic typology.", "We first outline the technical linguistic background necessary for the model's exposition.", "Chomsky famously argued that the human brain contains a prior, as it were, over possible linguistic structures, which he termed universal grammar (Chomsky, 1965).", "The connection between Chomsky's Universal Grammar and the Bayesian prior is an intuitive one, but the earliest citation we know for the connection is Eisner (2002, 2).", "As a theory, universal grammar holds great promise in explaining the typological variation of human language.", "Cross-linguistic similarities and differences may be explained by the influence universal grammar exerts over language acquisition and change.", "While universal grammar arose early on in the writtings of Chomsky, early work in generative grammar focused primarily on English (Harris, 1995).", "Indeed, Chomsky's Syntactic Structures contains exclusively examples in English (Chomsky, 1957).", "As the generative grammarians turned their focus to a wider selection of languages, the principles and parameters framework for syntactic analysis rose to prominence.", "Given the tight relationship between the theory of universal grammar and typology, principles and parameters offers a fruitful manner in which to research typological variation.", "The principles and parameters takes a parametric view of linguistic typology.", "The structure of human language is governed by a series of principles , which are hard constraints on human language.", "A common example of a principle is the requirement that every sentence has a subject, even if one that is not pronounced; see the discussion on the pro-drop parameter in Carnie (2013).", "Principles are universally true for all languages.", "On the other hand, languages are also governed by parameters .", "Unlike principles, parameters are the parts of linguistic structure that are allowed to vary.", "It is useful to think of parameters as attributes that can take on a variety of values.", "As Chomsky (2000) himself writes we can think of the initial state of the faculty of language as a fixed network connected to a switch box; the network is constituted of the principles of language, while the switches are the options to be determined by experience. When the switches are set one way, we have Swahili; when they are set another way, we have Japanese. 
Each possible human language is identified as a particular setting of the switches: a setting of parameters, in technical terminology.", "What are possible parameters?", "Here, in our formalisation of the parameter aspect of the principles-and-parameters framework, we take a catholic view of parameters, encompassing all areas of linguistics, rather than just syntax.", "For example, as we saw before, consider the switch of devoicing word-final obstruents as a parameter.", "We note that while principles-and-parameters typology has primarily been applied to syntax, there are also interesting applications to non-syntactic domains.", "For instance, van Oostendorp (2015) applies a parametric approach to metrical stress in phonology; this is in line with our view.", "In the field of linguistic typology, there is a vibrant line of research which fits into the tradition of viewing typological parameters through the lens of principles and parameters.", "Indeed, while earlier work due to Chomsky focused on what have come to be called macro-parameters, many linguists now focus on micro-parameters, which are very close to the features found in the WALS dataset that we will be modelling (Baker, 2008; Nicolis and Biberauer, 2008; Biberauer et al., 2009).", "This justifies our viewing WALS through the lens of principles and parameters, even though the authors of WALS adhere to the functional-typological school.", "Notationally, we will represent the parameters as a vector $\pi = [\pi_1, \ldots, \pi_n]$.", "Each typological parameter $\pi_i$ is a binary variable; for instance, does the language admit word-final voiced obstruents?", "We now seek a probabilistic formalisation of the linguistic theory presented in §2; specifically, for every language $\ell$, we seek to explain the observed binary vector of parameters $\pi^{\ell}$: $\pi^{\ell}_i = 1$ indicates that the $i$-th parameter is on in language $\ell$.", "The heart of our model will be quite simple: every language $\ell$ will have a language embedding $\lambda_{\ell} \in \mathbb{R}^d$ and every parameter will have a parameter embedding $e_i \in \mathbb{R}^d$.", "Now, $\pi^{\ell}_i \sim \mathrm{sigmoid}(e_i^{\top} \lambda_{\ell})$.", "This model also takes inspiration from work in relation extraction (Riedel et al., 2013).", "Writing the joint distribution over the entire binary vector of parameters, we arrive at $p(\pi^{\ell} \mid \lambda_{\ell}) = \prod_{i=1}^{|\pi|} p(\pi^{\ell}_i \mid \lambda_{\ell}) \;(1)\; = \prod_{i=1}^{|\pi|} \mathrm{sigmoid}\big(e_i^{\top} \lambda_{\ell}\big) \;(2)\; = \prod_{i=1}^{|\pi|} \frac{1}{1 + \exp(-e_i^{\top} \lambda_{\ell})} \;(3)$.", "For an overview of differences between these schools, we refer the reader to Haspelmath (2008).", "Note that $p(\pi^{\ell})$ is, spiritually at least, a universal grammar: it is the prior over what sort of languages can exist, albeit encoded as a real vector.", "In the parlance of principles and parameters, the prior represents the principles.", "Then our model parameters are $\theta = \{e_1, \ldots, e_{|\pi|}, \lambda_1, \ldots, \lambda_{|L|}\}$.", "Note that for the remainder of the paper, we will never shorten 'model parameters' to simply 'parameters' to avoid ambiguity.", "We will, however, refer to 'typological parameters' as simply 'parameters'."
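To make the factorisation concrete, the following is a minimal sketch of the per-entry likelihood pi_i^l ~ sigmoid(e_i^T lambda_l) from eqs. (1)-(3). The use of PyTorch, the embedding dimension, and the toy matrix sizes are our own illustrative assumptions, not details given in the text.

```python
import torch
import torch.nn as nn

class TypologyFactorisation(nn.Module):
    """Exponential-family matrix factorisation sketch:
    p(pi[l, i] = 1) = sigmoid(e_i . lambda_l)."""

    def __init__(self, n_languages: int, n_parameters: int, dim: int = 64):
        super().__init__()
        self.lang = nn.Embedding(n_languages, dim)    # lambda_l, one row per language
        self.param = nn.Embedding(n_parameters, dim)  # e_i, one row per binary parameter

    def forward(self, lang_ids: torch.Tensor, param_ids: torch.Tensor) -> torch.Tensor:
        # Dot product of the two embeddings, squashed to a Bernoulli probability.
        logits = (self.lang(lang_ids) * self.param(param_ids)).sum(dim=-1)
        return torch.sigmoid(logits)

# Toy usage: 3 languages, 5 binarised parameters (all sizes and ids invented).
model = TypologyFactorisation(n_languages=3, n_parameters=5)
lang_ids = torch.tensor([0, 0, 2])    # observed cells: (language index, ...
param_ids = torch.tensor([1, 4, 1])   # ... parameter index)
observed = torch.tensor([1.0, 0.0, 1.0])

# Negative log-likelihood of the observed cells; the product over parameters
# in eqs. (1)-(3) becomes a sum of Bernoulli log-probabilities.
probs = model(lang_ids, param_ids)
nll = nn.functional.binary_cross_entropy(probs, observed)
```

The training regime described later in the text (Adam, 10 epochs, batch size 64, L2 weight 0.1) would, in this sketch, correspond to something like torch.optim.Adam(model.parameters(), weight_decay=0.1).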
We can view this model as a form of exponential-family matrix factorisation (Collins et al., 2001).", "Specifically, our model seeks to explain a binary matrix of parameters.", "We consider such matrices as the one in Figure 2, which depicts some of the binarised feature values for word order and affixation for English, Dutch, German, Vietnamese, Turkish, and Marind.", "We will have some parameters seen during training (highlighted in blue), some we use for evaluation (highlighted in red), and some which are unknown due to the nature of WALS (high-lighted in green).", "Crucially, the model in eq.", "(5) allows us to learn the correlations between typological parameters, as illustrated in Figure 1.", "We train the model over 10 epochs with a batch size of 64, using the Adam optimiser (Kingma and Ba, 2014) and L 2 regularisation (0.1), which corresponds to the Gaussian prior with variance 2 = 10 .", "A natural question we might ask is if our model can exploit unlabelled monolingual data to improve its performance.", "We explain how we can induce language embeddings from unlabelled data below and then incorporate these into our model through the prior eq.", "(4).", "This results in a semi-supervised model, as we incorporate an unsupervised pretraining step.", "This is motivated by the fact that related languages tend to exhibit correlations between each other.", "Figure 3 shows the distribution Figure 3: Distribution of feature values across three of the biggest language families in WALS.", "of a few features within the Semitic, Oceanic, and Indic language branches.", "Notice, for instance, the skewed distribution of feature values within the Indic branch: languages in that branch are almost exclusively head-initial with respect to word order, order of adposition and noun, and affixation.", "Words can be represented by distributed word representations, currently often in the form of word embeddings.", "Similarly to how words can be embedded, so can languages, by associating each language with a real-valued vector known as a language embedding .", "Training such representations as a part of a multilingual model allows us to infer similarities between languages.", "This is due to the fact that in order for multilingual parameter sharing to be successful in this setting, the neural network needs to use the language embeddings to encode features of the languages.", "Previous work has explored this type of representation learning in various tasks, such as NMT (Malaviya et al., 2017), language modelling (Tsvetkov et al., 2016; Ostling and Tiedemann, 2017), and tasks representing morphological, phonological, and syntactic linguistic levels (Bjerva and Augenstein, 2018a).", "In the context of computational typology, representations obtained through language modelling have been the most successful ( Ostling and Tiedemann, 2017).", "This approach is particularly interesting since unlabelled data is available for a large portion of the world's languages, meaning that high quality language embeddings can be obtained for more than 1,000 of the world's languages.", "In this work, we use a language modelling objective to pre-train language embeddings; we train a character-level neural language model with a distributed language embedding of language (cid:96) .", "Specifically, we use the model of Ostling and Tiedemann (2017), visualised in Figure 4.", "The model is a stacked character-based LSTM (Hochreiter and Schmidhuber, 1997) with two layers, followed by a softmax output layer.", "In order to accommodate 
the language embeddings, this relatively standard model is modified such that language embeddings are concatenated to the character embeddings at each time step.", "This method returns a language embedding, which we denote (cid:96) to distinguish it from the language embeddings (cid:96) discussed in the previous section.", "We use the same hyperparame-ter settings as Ostling and Tiedemann (2017), with 1024-dimensional LSTMs, 128-dimensional character embeddings, and 64-dimensional language embeddings.", "Training is done with Adam (Kingma and Ba, 2014), and using early stopping.", "In the semi-supervised regime, we use the estimated language embedding (cid:96) from the language model and define the model as follows p ( (cid:96) | (cid:96) ) = | | (cid:89) i =1 sigmoid ( e (cid:62) i (cid:96) ) (6) omitting the learned language embedding in the matrix factorisation.", "The likelihood of this model is now convex in the parameter embeddings.", "In contrast to the full matrix factorisation setting, here, all language-specific knowledge must come from an external source, namely, the unlabelled text.", "In this section, we introduce a novel task for linguistic typology, which we term typological collaborative filtering .", "Typological KBs such as WALS are known to be incomplete.", "In other words, not all parameters (cid:96)i are observed for all languages.", "Thus, a natural question we may ask how well our models can predict unobserved (or held-out) parameters.", "We view this problem as analogous to collaborative filtering (CF).", "Our task is similar to knowledge base population or completion in that we start off with a partly populated KB which we aim to complete, but differs in the aspect that we attempt to infer values from correlations between features and similarities between languages, rather than inferring these from a collection of texts.", "CF is a common technique for recommender systems (see 9).", "Consider the task of recommending a new movie to a customer.", "Given which movies different users have liked (equivalent to a typological parameter being on') and which movies the user has disliked (equivalent to the typological parameter being off'), a CF model tries to figure out the (latent) preferences of each user and the latent genre of each movie.", "In our setting, the languages are analogous to users and movies are analogous to parameters.", "Our model in eq.", "(5) seeks to learn what latent properties of languages and what latent properties of parameters explain their correlations.", "WALS.", "The World Atlas of Language Structures (WALS) is a large knowledge base of typological properties at the lexical, phonological, syntactic and semantic level on which we will run our experiments.", "The documentation of linguistic structure is spread throughout a wide variety of academic works, ranging from field linguistics to grammars describing the nuances of individual grammatical uses.", "KB creation is a laborious tasks as it involves distilling knowledge into a single, standardised resource, which, naturally, will be incomplete, prompting the need for methods to complete them (Min et al., 2013).", "In the case of WALS, few languages have all values annotated for all of the properties.", "In this section, we offer a formalisation of typological KBs to allow for our development of a probabilistic model over vectors of properties.", "WALS, for instance, contains n = 202 different parameters (Dryer and Haspelmath, 2013).", "Binarisation of WALS.", "Many common typological KBs, 
including WALS, the one studied here, contain binary as well as non-binary parameters.", "To deal with this, we binarise the KB as follows: Whenever there is a typological parameter that takes 3 values, e.g., Feature 81A: Order of Subject, Object and Verb' which takes the 7 values SOV', SVO', VSO', VOS', OVS', OSV', No dominant order', we introduce that many binary parameters.", "At test time, we get non-binary predictions by using a simple decoder that returns the arg max over the predicted probabilities for the binary features.", "As each typological parameter with n 3 feature values is coded as a one-hot binary vector of length n , we need to make sure that we do not mix a single typological parameter for a language into the training and test sets.", "This is visualised in Figure 2, where we train on the blue squares, i.e., the binarised 81A feature for for English, and the 26A feature for Dutch, as well as all features for all non-Germanic languages.", "The model is then evaluated on the held-out features for Germanic highlighted in red, i.e., 26A for English, 81A for Dutch, and both of these for German.", "This is important, as knowing that a language is SVO would make it trivial to infer that it is not, e.g., SOV.", "Unlabelled multilingual data.", "To induce language embeddings, we need a considerable amount of multilingual unlabelled data.", "We use an in-house version of the massively multilingual Bible corpus, so as to have comparable data for all languages, although parallel data is not a strict necessity for our method.", "2 We train the multilingual language model on a collection of 975 languages, each with approximately 200,000 tokens available.", "We only train on languages for which the symbol size is relatively modest, a criterion which we fulfil by only using translations with Latin, Cyrillic, and Greek alphabets.", "Language used in the bible differs substantially from most modern language use, which would be a challenge if one were interested in transferring the language model itself.", "Here, we are only interested in the distributed language embeddings for each language (cid:96) .", "It is safe to assume that the typological features underlying the texts we use will be representative of those coded in WALS, hence the domain should not matter much and the method should work equally well given any domain of input texts for the unsupervised training of language embeddings.", "As described in 6, we binarise WALS.", "In order to compare directly with our semi-supervised extension, we limit our evaluation to the subset of languages which is the intersection of the languages for which we have Bible data, and the languages which are present in WALS.", "Finally, we observe that some languages have very few features encoded, and some features are encoded for very few languages.", "For instance, feature 10B (Nasal Vowels in West Africa), is only encoded for a total of 40 languages, and only one feature value appears for more than 10 languages.", "Because of this, we restrict our evaluation to languages and feature values which occur at least 10 times.", "Note that we evaluate on the original parameters, and not the binarised ones.", "Our general experimental set-up is as follows.", "We first split the languages in WALS into each language branch ( genus using WALS terminology).", "This gives us, e.g., a set of Germanic languages, a set of Romance languages, a set of Berber languages, and so on.", "(We note that this does not correspond to the notion of a language family, e.g., 
the Indo-European language family.)", "We wish to evaluate on this type of held-out set, as it is both relatively challenging: If we know the parameters of Portuguese, predicting the parameters for Spanish is a much easier task.", "This setup will both give us a critical estimate of how well we can predict features overall, in addition to mimicking a scenario in which we either have a poorly covered language or branch, which we wish to add to WALS.", "We evaluate our method for typological collaborative filtering on each language branch in WALS in a series of experiments.", "Given a branch B (a set of languages), we randomly select 80% of the feature-language combinations from the languages (cid:96) B , which we use for evaluation (e.g. those highlighted in red in Figure 2).", "The remaining 20% is either not considered, or (partially) used for training, as we run experiments in which we train on (0 , 1 , 5 , 10 , 20) % relative of the held-out data.", "The idea behind this is that it should be very difficult to predict features if nothing is known for a language at all, whereas knowing a few features of a language, or of related languages, should allow the model to take advantage of the strong correlations between features (Figure 1) and between languages (Figure 3).", "We train on all data in WALS for languages which are not in the current evaluation branch under consideration.", "Each experiment is then an evaluation of how well we can predict features for a completely or relatively unseen language family.", "Evaluation is done across the branches in WALS with more than four languages represented, after filtering away languages for which we have fewer than 10 features available.", "This amounts to a total of 36 branches, and 448 languages.", "We repeat each experiment 5 times per language branch, for each proportion of in-branch training data in (0 , 1 , 5 , 10 , 20) %, yielding a total of 900 experiment runs.", "The results reported are the mean across these runs.", "Figure 5 shows the micro F1 score we obtain averaged across macroareas.", "The bars indicate 95% confidence intervals.", "We can see that, with access to 20% of the in-branch training data, we can predict features at above 90% F1 score regardless of macroarea.", "Prediction generally is more challenging for languages in the macroarea Africa.", "This can be explained by, e.g., contrasting with the Eurasian macroarea.", "Whereas the latter includes branches which are relatively uncontroversial, such as Germanic and Slavic languages, this is not the case with the former.", "One such example is Bongo-Bagirmi (one of the evaluation branches, spoken in Central Africa), for which there is poor agreement in terms of classification (Bender, 2000).", "of languages, although the domain of these texts should not matter much.", "This allows us to take advantage of correlations between similar languages to point the model in the right direction.", "Even with 1% training data, it may be very useful for a model to know that, e.g., German and Dutch are grammatically very similar.", "Hence, if the 1% training data contains features for Dutch, it should be quite easy for the model to learn to transfer these to German.", "Figure 6 shows results with pre-training.", "Without any in-branch training data, using pretrained embeddings does not offer any improvement in prediction power.", "This can be explained by the fact language embeddings are updated during training, which leads to a drift in the representations present in the training 
material.", "This was chosen since not updating the representations yielded poor performance in all settings explored.", "We hypothesise that this is because, although a language modelling objective offers a good starting point in terms of encoding typological features, it is not sufficient to explain the typological diversity of languages.", "For instance, a language model should hardly care about the phonological nature of a language.", "This is in line with previous work which shows that the linguistic nature of the target task is important when predicting typological features with language embeddings (Bjerva and Augenstein, 2018a).", "However, once we have access to > 5% of in-branch training data, language embeddings offers a substantial improvement, e.g. an F1 error reduction of more than 50% with access to 10% of the in-branch data (see Table 1 and Table 3 for per-branch results in the appendix).", "This shows that we can partially aid a typologist's work by utilising unannotated data.", "In addition to evaluating our method of typological CF, we compare to some baselines drawn from earlier work.", "First, we report a most frequent value baseline.", "As many typological features are heavily skewed, this is quite high already.", "For instance, defaulting to the most frequent value for word order (i.e. SVO) would yield an accuracy of 41% ( Freq. in Table 1).", "A more involved baseline is Bjerva and Augenstein (2018a), who use pre-trained language embeddings in a k-NN classifier trained on individual WALS features ( Individual pred. in Table 1).", "For the baseline reported here, we only use one nearest neighbour for this prediction.", "The scores we obtain here are quite low compared to Bjerva and Augenstein (2018a), which is explained by the fact that we have access to very little training data in the current setting, and highlights the importance of taking advantage of correlations between languages and features, and not simply looking at these factors in isolation.", "Finally, we compare our typological collaborative filtering approach, as well as our semi-supervised extension ( T-CF and SemiSup in Table 1).", "Accuracy for several experimental settings is visualised in Figure 7, broken down by the linguistic category of the predicted features.", "Since results change little between the 5% in-branch setting and higher percentages, we only look at 0%, 1% and 5% here.", "We also visualise accuracy without (Fig-ure 7, left) and with our semi-supervised extension (Figure 7, right) in each setting.", "Focussing on Figure 7 (left) alone first we observe an expected pattern: using increasingly more in-branch data boosts performance across all feature groups.", "This increase in accuracy can be attributed to the model having more knowledge about each query language in itself and about how languages relate to one another, based on similarities in their parameter configurations.", "Making a prediction about the order of adposition and noun phrase in a given language, lacking any other information about that language, is basically a shot in the dark.", "In-branch training data, in our experiments, includes in-language training data, too.", "Having one piece of information about the word order of that language, its ordering of relative clause and noun, or even its affixational properties, immediately makes the prediction informed rather than random: in many other languages within and outside this particular language family the model would have likely observed a strong correlation between 
these features and the order of adposition and noun phrase, which are all subject to the more general headedness parameter.", "Certain features and feature configurations may not be as abundant cross-linguistically as the set of headedness features.", "In those cases, access to in-branch data is crucial.", "Consider e.g. the feature 10B Nasal Vowels in West Africa: a handful of language branches exhibit this feature and at least one of its values, no nasal vs. oral vowel contrast , is characteristic predominantly of Niger-Congo languages.", "Without any in-branch training data, the model's knowledge of this feature value is extremely limited, making its correct prediction for a Niger-Congo language virtually impossible.", "A small amount of in-branch training data thus increases the chance of a correct prediction greatly.", "Comparing Figure 7 to Figure 6 reveals a crucial finding.", "While we see very little improvement from pretraining for 0% in-branch training overall, for individual linguistic categories, it mostly benefits prediction: seven out of nine feature groups are predicted more accurately with pretraining.", "Phonological and morphological predictions experience moderate deterioration, however, counterbalancing much of the improvement in other categories, which leads to the overall result of seemingly lit-Language Genus Fixed Stress Location Weight-Sensitive Stress English Germanic ?", "tle improvement from pretraining.", "The limited effect of pretraining on prediction of phonological and morphological features can be explained with reference to the richness and complexity of these linguistic domains, which makes for data sparsity and generally makes them harder to learn based on distributional information alone.", "Moreover, a number of phonological features refer to aspects of language that may not be reflected in writing, such as stress and devoicing.", "All other categories concern syntactic and semantic information, which is known to be learnable from word distribution, and therefore benefit from the knowledge carried by language embeddings.", "Figure 7 (right) shows an unsteady interaction between pretraining and the addition of increasing amounts of in-branch data.", "While pretraining alone helps for predicting most features, as pointed out above, an extra 1% of in-branch data in the pretrained setting has a rather variable impact across feature groups.", "For a few groups it helps, as is expected, for a few it has no effect and for two groups, Word Order' and Simple Clauses', it makes for quite a drop in accuracy.", "We speculate that this effect, while negative, is indicative of the general power of language embeddings in associating related languages.", "Consider the test query Fixed Stress Location' in English, where the 1% of in-branch training data contains the information in Table 2.", "Based on feature correlation alone, the model should predict No fixed stress' for English, since this value always co-occurs with Right-oriented stress'.", "Yet, due to the proximity in the English and Icelandic embeddings, the model may copy the value of Icelandic and falsely predict Initial stress' for English, too.", "The risk of this happening decreases with more in-branch training data, since the model can generalise over more in-branch features.", "Lastly, notice that accuracy for phonological features remains low even with 5% of in-branch data, and it is lower in the pretrained setting compared to the no-pretraining one.", "This brings us to the conclusion that using 
pretrained embeddings which are fine-tuned for specific tasks that encode different linguistic levels, as in Bjerva and Augenstein (2018a), might also be useful in our semi-supervised extension of typological collaborative filtering.", "Figure 7: Accuracy per feature group (Germanic).", "Computational Typology. The availability of unlabelled datasets for hundreds of languages permits inferring linguistic properties and categories (Östling, 2015; Asgari and Schütze, 2017).", "Individual prediction of typological features has been attempted in conjunction with several NLP tasks (Malaviya et al., 2017; Bjerva and Augenstein, 2018a,b).", "Our work is most similar to Murawaki (2017), who presents a Bayesian approach to utilising relations between features and languages for feature prediction.", "However, our work differs on several important counts, as we", "(i) include language information obtained through unsupervised learning, which allows us to take advantage of raw data and predict features for completely unannotated languages,", "(ii) analyse the effects of varying amounts of known features, especially in situations with and without in-branch training data, and", "(iii) view the problem of typological features through the lens of parameters from principles and parameters (Chomsky, 2000).", "Deep generative models have also been explored previously for modelling phonology (Cotterell and Eisner, 2017).", "Our work builds on these research directions, by", "(i) developing a deep generative model which", "(ii) takes advantage of correlations, rather than predicting features individually, and", "(iii) exploits unlabelled data.", "This work is also related to linguistic representations encoded in neural models (Kádár et al., 2017) and language embeddings (Bjerva et al., 2019), to multilingual relations between languages at various representational levels (Beinborn and Choenni, 2019), as well as to the related problem of phylogenetic inference (Farach et al., 1995; Nichols and Warnow, 2008).", "For a survey of typology in NLP, see Ponti et al.
(2018).", "Matrix Factorisation Collaborative Filtering was popularised in the early 1990s as a technique for recommender systems with applications such as mail filtering (Goldberg et al., 1992), and article (Resnick et al., 1994) and movie recommendation (Dahlen et al., 1998).", "Model-based algorithms soon became popular (Breese et al., 1998) to overcome the cold start problem arising for unseen users or items at test time.", "The most successful one of these, in turn, is matrix factorisation, as applied in this paper, which represents users and items as (dense) vectors in the same latent feature space and measures their compatibility by taking the dot product between the two representations (Koren et al., 2009; Bokde et al., 2015).", "Beyond recommender systems, matrix factorisation has shown successes in a wide variety of subareas of NLP (Riedel et al., 2013; Rocktaschel et al., 2015; Levy and Goldberg, 2014; Lei et al., 2014; Augenstein et al., 2018).", "We introduce a generative model inspired by the principles-and-parameters framework, drawing on the correlations between typological features of languages to solve the novel task of typological collaborative filtering.", "We further show that raw text can be utilised to infer similarities between languages, thus allowing for extending the method with semi-supervised language embeddings.", "We acknowledge the computational resources provided by CSC in Helsinki through NeIC-NLPL (www.nlpl.eu), and the support of the NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.", "The third author acknowledges support from a Facebook Fellowship." ]
[ "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "result", "abstain", "objective", "abstain", "objective", "objective", "objective", "method", "objective", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "method", "objective", "abstain", "other", "other", "other", "method", "other", "other", "other", "method", "other", "other", "other", "method", "other", "objective", "objective", "other", "other" ]
[ "One key consequence of the information revolution is a significant increase and a contamination of our information supply.", "The practice of fact-checking won't suffice to eliminate the biases in text data we observe, as the degree of factuality alone does not determine whether biases exist in the spectrum of opinions visible to us.", "To better understand controversial issues, one needs to view them from a diverse yet comprehensive set of perspectives .", "For example, there are many ways to respond to a claim such as animals should have lawful rights , and these responses form a spectrum of perspectives, each with a stance relative to this claim and, ideally, with evidence supporting it.", "Inherently, this is a natural language understanding task, and we propose to address it as such.", "Specifically, we propose the task of substantiated perspective discovery where, given a claim , a system is expected to discover a diverse set of well-corroborated perspectives that take a stance with respect to the claim.", "Each perspective should be substantiated by evidence paragraphs which summarize pertinent results and facts.", "We construct PERSPECTRUM , a dataset of claims, perspectives and evidence, making use of online debate websites to create the initial data collection, and augmenting it using search engines in order to expand and diversify our dataset.", "We use crowdsourcing to filter out noise and ensure high-quality data.", "Our dataset contains 1 k claims, accompanied by pools of 10 k and 8 k perspective sentences and evidence paragraphs, respectively.", "We provide a thorough analysis of the dataset to highlight key underlying language understanding challenges, and show that human baselines across multiple subtasks far outperform machine baselines built upon state-of-the-art NLP techniques.", "This poses a challenge and an opportunity for the NLP community to address.", "Understanding most nontrivial claim s requires insights from various perspective s.", "Today, we make use of search engines or recommendation systems to retrieve information relevant to a claim, but this process carries multiple forms of bias .", "In particular, they are optimized relative to the claim (query) presented, and the popularity of the relevant documents returned, rather than with respect to the diversity of the perspective s presented in them or whether they are supported by evidence.", "In this paper, we explore an approach to mitigating this selection bias (Heckman, 1979) when studying (disputed) claims.", "Consider the claim shown in Figure 1: animals should have lawful rights.", "One might compare the biological simi-larities/differences between humans and other animals to support/oppose the claim.", "Alternatively, one can base an argument on morality and rationality of animals, or lack thereof.", "Each of these arguments, which we refer to as perspective s throughout the paper, is an opinion, possibly conditional, in support of a given claim or against it.", "A perspective thus constitutes a particular attitude towards a given claim .", "Natural language understanding is at the heart of developing an ability to identify diverse perspectives for claims.", "In this work, we propose and study a setting that would facilitate discovering diverse perspectives and their supporting evidence with respect to a given claim .", "Our goal is to identify and formulate the key NLP challenges underlying this task, and develop a dataset that would allow a systematic study of these challenges.", "For example, for 
the claim in Figure 1, multiple (non-redundant) perspectives should be retrieved from a pool of perspectives; one of them is animals have no interest or rationality , a perspective that should be identified as taking an opposing stance with respect to the claim .", "Each perspective should also be well-supported by evidence found in a pool of potential pieces of evidence.", "While it might be impractical to provide an exhaustive spectrum of ideas with respect to a claim , presenting a small but diverse set of perspectives could be an important step towards addressing the selection bias problem.", "Moreover, it would be impractical to develop an exhaustive pool of evidence for all perspectives, from a diverse set of credible sources.", "We are not attempting to do that.", "We aim at formulating the core NLP problems, and developing a dataset that will facilitate studying these problems from the NLP angle, realizing that using the outcomes of this research in practice requires addressing issues such as trustworthiness (Pasternack and Roth, 2010, 2013) and possibly others.", "Inherently, our objective requires understanding the relations between perspective s and claim s, the nuances in the meaning of various perspective s in the context of claim s, and relations between perspectives and evidence.", "This, we argue, can be done with a diverse enough, but not exhaustive, dataset.", "And it can be done without attending to the legitimacy and credibility of sources contributing evidence, an important problem but orthogonal to the one studied here.", "To facilitate the research towards developing solutions to such challenging issues, we propose Figure 2: Depiction of a few claims, their perspectives and evidences from PERSPECTRUM .", "PERSPECTRUM , a dataset of claims , perspectives and evidence paragraphs.", "For a given claim and pools of perspectives and evidence paragraphs , a hypothetical system is expected to select the relevant perspectives and their supporting paragraphs.", "Our dataset contains 907 claims, 11,164 perspectives and 8,092 evidence paragraphs.", "In constructing it, we use online debate websites as our initial seed data, and augment it with search data and paraphrases to make it richer and more challenging.", "We make extensive use of crowdsourcing to increase the quality of the data and clean it from annotation noise.", "The contributions of this paper are as follows: To facilitate making progress towards the problem of substantiated perspective discovery , we create a high-quality dataset for this task.", "1 We identify and formulate multiple NLP tasks that are at the core of addressing the substantiated perspective discovery problem.", "We show that humans can achieve high scores on these tasks.", "We develop competitive baseline systems for each sub-task, using state-of-the-art techniques.", "In this section we provide a closer look into the challenge and propose a collection of tasks that move us closer to substantiated perspective discovery .", "To clarify our description we use to following notation.", "Let c indicate a target claim of interest (for example, the claims c 1 and c 2 in Figure 2).", "Each claim c is addressed by a collection of perspectives { p } that are grouped into clusters of equivalent perspectives.", "Additionally, each perspective p is supported, relative to c , by at least one evidence paragraph e , denoted e (cid:15) p | c .", "Determination of argue-worthy claims: not every claim requires an in-depth discussion of perspectives.", "For a system 
to be practical, it needs to be equipped with understanding argumentative structures (Palau and Moens, 2009) in order to discern disputed claims from those with straightforward responses.", "We set aside this problem in this work and assume that all the inputs to the systems are discussion-worthy claims.", "Discovery of pertinent perspectives: a system is expected to recognize argumentative sentences (Cabrio and Villata, 2012) that directly address the points raised in the disputed claim.", "For example, while the perspectives in Figure 2 are topically related to the claims, p 1 , p 2 do not directly address the focus of claim c 2 (i.e., use of animals in entertainment ).", "Perspective equivalence: a system is expected to extract a minimal and diverse set of perspectives.", "This requires the ability to discover equivalent perspectives p, p (cid:48) , with respect to a claim c : p | c p (cid:48) | c .", "For instance, p 3 and p 4 are equivalent in the context of c 2 ; however, they might not be equivalent with respect to any other claim.", "The conditional nature of perspective equivalence differentiates it from the paraphrasing task (Bannard and Callison-Burch, 2005).", "Stance classification of perspectives: a system is supposed to assess the stances of the perspectives with respect to the given claim (supporting, opposing, etc.) (Hasan and Ng, 2014).", "Substantiating the perspectives: a system is expected to find valid evidence paragraph(s) in support of each perspective.", "Conceptually, this is similar to the well-studied problem of textual entailment (Dagan et al., 2013) except that here the entailment decisions depend on the choice of claims.", "Claim verification.", "The task of fact verification or fact-checking focuses on the assessment of the truthfulness of a claim, given evidence (Vlachos and Riedel, 2014; Mitra and Gilbert, 2015; Samadi et al., 2016; Wang, 2017; Nakov et al., 2018; Hanselowski et al., 2018; Karimi et al., 2018; Al-hindi et al., 2018).", "These tasks are highly related to the task of textual-entailment that has been extensively studied in the field (Bentivogli et al., 2008; Dagan et al., 2013; Khot et al., 2018).", "Some recent work study jointly the problem of identifying evidence and verifying that it supports the claim (Yin and Roth, 2018).", "Our problem structure encompasses the fact verification problem (as verification of perspectives from evidence ; Figure 1).", "Stance classification.", "Stance classification aims at detecting phrases that support or oppose a given claim.", "The problem has gained significant attention in the recent years; to note a few important ones, Hasan and Ng (2014) create a dataset of dataset text snippets, annotated with reasons (similar to perspectives in this work) and stances (whether they support or oppose the claim).", "Unlike this work, our pool of the relevant reasons is not restricted.", "Ferreira and Vlachos (2016) create a dataset of rumors (claims) coupled with news headlines and their stances.", "There are a few other works that fall in this category (Boltuzic and Snajder, 2014; Park and Cardie, 2014; Rinott et al., 2015; Swanson et al., 2015; Mohammad et al., 2016; Sobhani et al., 2017; Bar-Haim et al., 2017).", "Our approach here is closely related to existing work in this direction, as stance classification is part of the problem studied here.", "Argumentation.", "There is a rich literature on formalizing argumentative structures from free text.", "There are a few theoretical works that lay the ground work to 
characterizing units of arguments and argument-inducing inference (Teufel et al., 1999; Toulmin, 2003; Freeman, 2011).", "Others have studied the problem of extracting argumentative structures from free-form text; for example, Palau and Moens (2009); Khatib et al. (2016); Ajjour et al. (2017) studied elements of arguments and the internal relations between them.", "Feng and Hirst (2011) classified an input into one of the argument schemes.", "Habernal and Gurevych (2017) provided a large corpus annotated with argument units.", "Cabrio and Villata (2018) provide a thorough survey the recent work in this direction.", "A few other works studied other aspects of argumentative structures (Cabrio and Villata, 2012; Khatib et al., 2016; Lippi and Torroni, 2016; Zhang et al., 2017; Stab and Gurevych, 2017).", "A few recent works use a similar conceptual design that involves a claim , perspectives and evidence", ".These works are either too small due to the high cost of construction (Aharoni et al., 2014) or too noisy because of the way they are crawled from online resources (Wachsmuth et al., 2017; Hua and Wang, 2017).", "Our work makes use of both online content and of crowdsourcing, in order to construct a sizable and high-quality dataset.", "In this section we describe a multi-step process, constructed with detailed analysis, substantial re-finements and multiple pilots studies.", "We use crowdsourcing to annotate different aspects of the dataset.", "We used Amazon Mechanical Turk (AMT) for our annotations, restricting the task to workers in five English-speaking countries (USA, UK, Canada, New Zealand, and Aus-tralia), more than 1000 finished HITs and at least a 95% acceptance rate.", "To ensure the diversity of responses, we do not require additional qualifica-tions or demographic information from our annotators.", "For any of the annotations steps described below, the users are guided to an external platform where they first read the instructions and try a verification step to make sure they have understood the instructions.", "Only after successful completion are they allowed to start the annotation tasks.", "make sure that the workers are responding objectively to the tasks (as opposed to using their personal opinions or preferences).", "The screen-shots of the annotation interfaces for each step are included in the Appendix (Section A.3).", "In the steps outlined below, we filter out a subset of the data with low raterrater agreement (see Appendix A.2).", "In certain steps, we use an information retrieval (IR) system 2 to generate the best candidates for the task at hand.", "Step 1: The initial data collection.", "We start by crawling the content of a few notable debating websites: idebate.com, debatewise.org, procon.org .", "This yields 1 k claims, 8 k perspectives and 8 k evidence paragraphs (for complete statistics, see Table 4 in the Appendix).", "This data is significantly noisy and lacks the structure we would like.", "In the following steps we explain how we denoise it and augment it with additional data.", "Step 2a: Perspective verification.", "For each perspective we verify that it is a complete English sentence, with a clear stance with respect to the given claim.", "For a fixed pair of claim and perspective , we ask the crowd-workers to label the perspective with one of the five categories of support , oppose , mildly-support , mildly-oppose , or not a valid perspective .", "The reason that we ask for two levels of intensity is to distinguish mild or conditional arguments from 
those that express stronger positions.", "Every 10 claims (and their relevant perspectives) are bundled to form a HIT.", "Three independent annotators solve a HIT, and each gets paid $1.5-2 per HIT.", "To get rid of the ambiguous/noisy perspectives we measure rater-rater agreement on the resulting data and retain only the subset which has a significant agreement of 0 .", "5 .", "To account for minor disagreements in the intensity of 2 www.elastic.co perspective stances, before measuring any notion of agreement, we collapse the five labels into three labels, by collapsing mildly-support and mildly-oppose into support and oppose , respectively.", "To assess the quality of these annotations, two of the authors independently annotate a random subset of instances in the previous step (328 perspectives for 10 claims).", "Afterwards, the differences were adjudicated.", "We measure the accuracy adjudicated results with AMT annotations to estimate the quality of our annotation.", "This results in an accuracy of 94%, which shows high-agreement with the crowdsourced annotations.", "Step 2b: Perspective paraphrases.", "To enrich the ways the perspectives are phrased, we crowdsource paraphrases of our perspectives.", "We ask annotators to generate two paraphrases for each of the 15 perspectives in each HIT, for a reward of $1.50.", "Subsequently, we perform another round of crowdsourcing to verify the generated paraphrases.", "We create HITs of 24 candidate paraphrases to be verified, with a reward of $1.", "Overall, this process gives us 4 .", "5 paraphrased perspectives.", "The collected paraphrases form clusters of equivalent perspectives, which we refine further in the later steps.", "Step 2c: Web perspectives.", "In order to ensure that our dataset contains more realistic sentences, we use web search to augment our pool of perspectives with additional sentences that are topically related to what we already have.", "Specifically, we use Bing search to extract sentences that are similar to our current pool of perspectives, by querying claim+perspective.", "We create a pool of relevant web sentences and use an IR system (introduced earlier) to retrieve the 10 most similar sentences.", "These candidate perspectives are annotated using (similar to step 2a) and only those that were agreed upon are retained.", "Step 2d: Final perspective trimming.", "In a fi-nal round of annotation for perspectives, an expert annotator went over all the claims in order to verify that all the equivalent perspectives are clustered together.", "Subsequently, the expert annotator went over the most similar claim-pairs (and their perspectives), in order to annotate the missing perspectives shared between the two claims.", "To cut the space of claim pairs, the annotation was done on the top 350 most similar claim pairs retrieved Category Statistic Value Claims # of claims (step 1) 907 avg.", "Step 3: Evidence verification.", "The goal of this step is to decide whether a given evidence paragraph provides enough substantiations for a perspective or not.", "Performing these annotations exhaustively for any perspective-evidence pair is not possible.", "Instead, we make use of a retrieval system to annotate only the relevant pairs.", "In particular, we create an index of all the perspectives retained from step 2a .", "For a given evidence paragraph, we retrieve the top relevant perspectives.", "We ask the annotators to note whether a given evidence paragraph supports a given perspective or not.", "Each HIT contains a 20 evidence 
paragraphs and their top 8 relevant candidate perspectives.", "Each HIT is paid $ 1 and annotated by at least 4 independent annotators.", "In order to assess the quality of our annotations, a random subset of instances (4 evidence-perspective pairs) are annotated by two independent authors and the differences are adjudicated.", "We measure the accuracy of our adjudicated labels versus AMT labels, resulting in 87.7%.", "This indicates the high quality of the crowdsourced data.", "We now provide a brief summary of PERSPECTRUM .", "The dataset contains about 1 k claims with a significant length diversity (Table 2).", "Additionally, the dataset comes with 12 k perspectives, most of which were generated through paraphrasing (step 2b).", "The perspectives which convey the same point with respect to a claim are grouped into clusters.", "On average, each cluster has a size of 2 .", "3 which shows that, on average, many perspectives have Figure 3: Distribution of claim topics.", "equivalents.", "More granular details are available in Table", "2. To better understand the topical breakdown of claims in the dataset, we crowdsource the set of topics associated with each claim (e.g., Law, Ethics, etc .) We observe that, as expected, the three topics of Politics, World, and Society have the biggest portions (Figure 3).", "Additionally, the included claims touch upon 10+ different topics.", "Figure 4 depicts a few popular categories and sampled questions from each.", "We perform a closer investigation of the abilities required to solve the stance classification task.", "One of the authors went through a random subset of claim-perspectives pairs and annotated each with the abilities required in determining their stances labels.", "We follow the common definitions used in prior work (Sugawara et al., 2017; Khashabi et al., 2018).", "The result of this annotation is depicted in Figure 5.", "As can be seen, the problem requires understanding of commonsense , i.e., an understanding that is commonly shared among humans and rarely gets explicitly mentioned in the text.", "Additionally, the task requires various types of coreference understanding, such as event coreference and entity coreference .", "In this section we provide empirical analysis to address the tasks.", "We create a split of 60%/15%/25% of the data train/dev/test.", "In order to make sure our baselines are not overfitting to the keywords of each topic (the topic annotation from Section 4.2), we make sure to have claims with the same topic fall into the same split.", "For simplicity, we define a notation which we will extensively use for the rest of this paper.", "The clusters of equivalent perspectives are denoted as [[ p ]] , given a representative member p .", "Let P ( c ) denote the collection of relevant perspectives to a claim c , which is the union of all the equivalent perspectives participating in the claim: { [[ p i ]] } i .", "Let E ([[ p ]]) = E ( p ) = (cid:83) i e i denote the set of evidence documents lending support to a perspective p .", "Additionally, denote the two pools of perspectives and evidence with U p and U e , respectively.", "We make use of the following systems in our evaluation:", "IR (Information Retrieval).", "This baseline has been successfully used for related tasks like Question Answering (Clark et al., 2016).", "We create two versions of this baseline: one with the pool of perspectives U p and one with the pool of evidences U e .", "We use this system to retrieve a ranked list of best matching 
perspective/evidence from the corresponding index.", "BERT (Contextual representations).", "A recent state-of-the-art contextualized representation (De-vlin et al., 2018).", "This system has been shown to be effective on a broad range of natural language understanding tasks.", "Human Performance.", "Human performance provides us with an estimate of the best achievable results on datasets.", "We use human annotators to measure human performance for each task.", "We randomly sample 10 claims from the test set, and instruct two expert annotators to solve each of T1 to T4.", "We perform evaluations on four different subtasks in our dataset.", "In all of the following evaluations, the systems are given the two pools of perspectives U p and evidences U e .", "T1: Perspective extraction.", "A system is expected to return the collection of mutually disjoint perspectives with respect to a given claim.", "Let P ( c ) be the set of output perspectives.", "Define the precision and recall as Pre ( c ) = (cid:80) p P ( c ) 1 { p,s.t. p [[ p ]] } | P ( c ) | and Rec ( c ) = (cid:80) p P ( c ) 1 { p,s.t. p [[ p ]] } | P ( c ) | respectively.", "T2: Perspective stance classification.", "Given a claim, a system is expected to label every perspective in P ( c ) with one of two labels support or oppose .", "We use the well-established definitions of precision-recall for this binary classification task.", "T3: Perspective equivalence.", "A system is expected to decide whether two given perspectives are equivalent or not, with respect to a given claim.", "We evaluate this task in a way similar to a clustering problem.", "For a pair of perspectives p 1 , p 2 P ( c ) , a system predicts whether the two are in the same cluster or not.", "The ground-truth is whether there is a cluster which contains both of the perspectives or not: p s.t. 
p P ( c ) p 1 , p 2 [[ p ]] .", "We use this pairwise definition for all the pairs in P ( c ) P ( c ) , for any claim c in the test set.", "T4: Extraction of supporting evidences.", "Given a perspective p , we expect a system to return all the evidence { e i } from the pool of evidence U e .", "Let E ( p ) and E ( p ) be the predicted and gold evidence for a perspective p .", "Define macro-precision and macro-recall as Pre ( p ) = | E ( p ) E ( p ) | | E ( p ) | and Rec ( p ) = | E ( p ) E ( p ) | | E ( p ) | , respectively.", "The metrics are averaged across all the perspectives p participating in the test set.", "T5: Overall performance.", "The goal is to get estimates of the overall performance of the systems.", "Instead of creating a complex measure that would take all the aspects into account, we approximate the overall performance by multiplying the disjoint measures in T 1 , T 2 and T 4 .", "While this gives an estimate on the overall quality, it ignores the pipeline structure of the task (e.g., the propagation of the errors throughout the pipeline).", "We note that the task of T 3 (perspective equivalence) is indirectly being measured within T 1 .", "Furthermore, since we do not report an IR performance on T 2 , we use the always supp baseline instead to estimate an overall performance for IR.", "Table 3 shows a summary of the experimental results.", "To measure the performance of the IR system, we use the index containing U p .", "Given each claim, we query the top k perspectives, ranked according to their retrieval scores.", "We tune k on our development set and report the results on the test section according to the tuned parameter.", "We use IR results as candidates for other solvers (includ-ing humans).", "For this task, IR with top-15 candidates yields > 90% recall (for the PR-curve, see Figure 6 in the Appendix).", "In order to train BERT on this task, we use the IR candidates as the training instances.", "We then tune a threshold on the dev data to select the top relevant perspectives.", "In order to measure human performance, we create an interface where two human annotators see IR top-k and select a minimal set of perspectives (i.e., no two equivalent perspectives).", "We measure the quality of perspective stance classification, where the input is a claim-perspective pair, mapped to { support, oppose } .", "The candidate inputs are generated on the collection of perspectives P ( c ) relevant to a claim c .", "To have an understanding of a lower bound for the metric, we measure the quality of an always-support baseline.", "We measure the performance of BERT on this task as well, which is about 20% below human performance.", "This might be because this task requires a deep understanding of commonsense knowledge/reasoning (as indicated earlier in Section 5).", "Since a retrieval system is unlikely to distinguish perspectives with different stances, we do not report the IR performance for this task.", "We create instances in the form of ( p 1 , p 2 , c ) where p 1 , p 2 P ( c ) .", "The expected label is whether the two perspectives belong to the same equivalence class or not.", "In the experiments, we observe that BERT has a significant performance gain of 36% over the IR baseline.", "Meanwhile, this system is behind human performance by a margin of 20% .", "We evaluate the systems on the extraction of items from the pool of evidences U e , given a claim perspective pair.", "To measure the performance of the IR system working with the index containing U e we issue a query 
containing the concatenation of a perspective-claim pair.", "Given the sorted results (according to their retrieval confi-dence score), we select the top candidates using a threshold parameter tuned on the dev set.", "We Setting Targetset System Pre.", "also use the IR system's candidates (top-60) for other baselines.", "This set of candidates yields a > 85% recall (for the PR-curve, see Figure 6 in the Appendix).", "We train BERT system to map each (gold) claim perspective pair to its corresponding evidence paragraph(s).", "Since each evidence paragraph could be long (hence hard to feed into BERT), we split each evidence paragraph into sliding windows of 3 sentences.", "For each claim perspective pair, we use all 3-sentences windows of gold evidence paragraphs as positive examples, and rest of the IR candidates as negative examples.", "In the run-time, if a certain percentage (tuned on the dev set) of the sentences from a given evidence paragraph are predicted as positive by BERT, we consider the whole evidence as positive (i.e. it supports a given perspective ).", "Overall, the performances on this task are lower, which could probably be expected, considering the length of the evidence paragraphs.", "Similar to the previous scenarios, the BERT solver has a significant gain over a trivial baseline, while standing behind human with a significant margin.", "As one of the key consequences of the information revolution, information pollution and over-personalization have already had detrimental effects on our life.", "In this work, we attempt to facilitate the development of systems that aid in better organization and access to information, with the hope that the access to more diverse information can address over-personalization too (Vydiswaran et al., 2014).", "The dataset presented here is not intended to be exhaustive , nor does it attempt to reflect a true distribution of the important claims and perspectives in the world, or to associate any of the perspective and identified evidence with levels of expertise and trustworthiness.", "Moreover, it is important to note that when we ask crowd-workers to evaluate the validity of perspectives and evidence, their judgement process can potentially be influ-enced by their prior beliefs (Markovits and Nantel, 1989).", "To avoid additional biases introduced in the process of dataset construction, we try to take the least restrictive approach in filtering dataset content beyond the necessary quality assurances.", "For this reason, we choose not to explicitly ask annotators to filter contents based on the intention of their creators (e.g. 
offensive content).", "A few algorithmic components were not addressed in this work, although they are important to the complete perspective discovery and presentation pipeline.", "For instance, one has to first verify that the input to the system is a reasonably well-phrased and an argue-worthy claim.", "And, to construct the pool of perspectives, one has to extract relevant arguments (Levy et al., 2014).", "In a similar vein, since our main focus is the study of the relations between claim s, perspective s, and evidence , we leave out important issues such as their degree of factuality (Vlachos and Riedel, 2014) or trustworthiness (Pasternack and Roth, 2014, 2010) as separate aspects of problem.", "We hope that some of these challenges and limitations will be addressed in future work.", "The importance of this work is three-fold; we define the problem of substantiated perspective discovery and characterize language understanding tasks necessary to address this problem.", "We combine online resources, web data and crowdsourcing and create a high-quality dataset, in order to drive research on this problem.", "Finally, we build and evaluate strong baseline supervised systems for this problem.", "Our hope is that this dataset would bring more attention to this important problem and would speed up the progress in this direction.", "There are two aspects that we defer to future work.", "First, the systems designed here assumed that the input are valid claim sentences.", "To make use of such systems, one needs to develop mechanisms to recognize valid argumentative structures.", "In addition, we ignore trustworthiness and credibility issues, important research issues that are addressed in other works.", "The authors would like to thank Jennifer Sheffield, Stephen Mayhew, Shyam Upadhyay, Nitish Gupta and the anonymous reviewers for insightful comments and suggestions.", "This work was supported in part by a gift from Google and by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA).", "The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government." ]
[ "result", "result", "abstain", "abstain", "objective", "objective", "abstain", "objective", "method", "abstain", "result", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "result", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "method", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
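Illustrative aside (not part of the dataset record above or the source paper): the record describes the T4 evidence-extraction metrics in prose; the minimal Python sketch below shows one way those macro-averaged precision/recall scores could be computed. All function and variable names here are assumptions made for this example, not the authors' code.

def evidence_prf(predicted, gold):
    """Macro precision/recall/F1 for T4 evidence extraction.

    predicted, gold: dicts mapping a perspective id to a set of evidence ids,
    i.e. E_hat(p) and E(p) in the record. Per-perspective scores are averaged,
    following Pre(p) = |E_hat(p) & E(p)| / |E_hat(p)| and
    Rec(p) = |E_hat(p) & E(p)| / |E(p)|.
    """
    precisions, recalls = [], []
    for p, gold_ev in gold.items():
        pred_ev = predicted.get(p, set())
        overlap = len(pred_ev & gold_ev)
        precisions.append(overlap / len(pred_ev) if pred_ev else 0.0)
        recalls.append(overlap / len(gold_ev) if gold_ev else 0.0)
    pre = sum(precisions) / len(precisions) if precisions else 0.0
    rec = sum(recalls) / len(recalls) if recalls else 0.0
    f1 = 2 * pre * rec / (pre + rec) if (pre + rec) else 0.0
    return pre, rec, f1

# Example: evidence_prf({"p1": {"e1", "e2"}}, {"p1": {"e1", "e3"}}) -> (0.5, 0.5, 0.5)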
[ "This work presents methods for learning crosslingual sentence representations using paired or unpaired bilingual texts.", "We hypothesize that the cross-lingual alignment strategy is transferable, and therefore a model trained to align only two languages can encode multilingually more aligned representations.", "We thus introduce dual-pivot transfer : training on one language pair and evaluating on other pairs.", "To study this theory, we design unsupervised models trained on unpaired sentences and single-pair supervised models trained on bitexts, both based on the unsupervised language model XLM-R with its parameters frozen.", "The experiments evaluate the models as universal sentence encoders on the task of unsupervised bitext mining on two datasets, where the unsupervised model reaches the state of the art of unsupervised retrieval, and the alternative single-pair supervised model approaches the performance of multilingually supervised models.", "The results suggest that bilingual training techniques as proposed can be applied to get sentence representations with multilingual alignment.", "Cross-lingual alignment as evaluated by retrieval tasks has been shown to be present in the representations of recent massive multilingual models which are not trained on bitexts (Pires et al., 2019; Conneau et al., 2020b).", "Other studies further show that sentence representations with higher crosslingual comparability can be achieved by training a cross-lingual mapping (Aldarmaki and Diab, 2019) or fine-tuning (Cao et al., 2020) for every pair of languages.", "These two lines of research show that, on the one hand, multilingual alignment arises from training using monolingual corpora alone, and, on the other, bilingual alignment can be enhanced by training on bitexts of specific language pairs.", "Combining these insights yields a question: can training with bilingual corpora help improve multilingual alignment?", "Given a language model encoding texts in different languages with some shared structure already, we can expect that the model further trained to align a pair of languages will take advantage of the shared structure and will therefore generalize the alignment strategy to other language pairs.", "From a practical point of view, bitexts for some pairs of languages are more abundant than others, and it is therefore efficient to leverage data from resource-rich pairs for the alignment of resource-poor pairs in training multilingual language models.", "To better understand the cross-lingual structure from the unsupervised models, we also ask the following question: how can multilingual alignment information be extracted from the unsupervised language models?", "Unsupervised multilingual models out-of-the-box as sentence encoders fall short of their supervised counterparts such as LASER (Artetxe and Schwenk, 2019b) in the task of bitext mining (Hu et al., 2020).", "The discovery of cross-lingual structure in the hidden states in the unsupervised model (Pires et al., 2019; Conneau et al., 2020b), however, raises the possibility that with relatively light post-training for better extraction of deep features, the unsupervised models can generate much more multilingually aligned representations.", "In this paper, we address both questions with the design of dual-pivot transfer , where a model is trained for bilingual alignment but tested for multilingual alignment.", "And we hypothesize that training to encourage similarity between sentence representations from two languages, the dual pivots, 
can help generate more aligned representations not only for the pivot pair, but also for other pairs.", "In particular, we design and study a simple extraction module on top of the pretrained multilingual language model XLM-R (Conneau et al., 8696 2020a).", "To harness different training signals, we propose two training architectures.", "In the case of training on unpaired sentences, the model is encouraged by adversarial training to encode sentences from the two languages with similar distributions.", "In the other case where bitexts of the two pivot languages are used, the model is encouraged to encode encountered parallel sentences similarly.", "Both models are then transferred to language pairs other than the dual pivots.", "This enables our model to be used for unsupervised bitext mining, or bitext mining where the model is trained only on parallel sentences from a single language pair.", "The experiments show that both training strategies are effective, where the unsupervised model reaches the state of the art on completely unsupervised bitext mining, and the one-pair supervised model approaching the state-of-the-art multilingually-supervised language models in one bitext mining task.", "Our contributions are fourfold: This study proposes effective methods of bilingual training using paired or unpaired sentences for sentence representation with multilingual alignment.", "The strategies can be incorporated in language model training for greater efficiency in the future.", "The work demonstrates that the alignment information in unsupervised multilingual language models is extractable by simple bilingual training of a light extraction module (without fine-tuning) with performance comparable to fully supervised models and reaching the state of the art of unsupervised models.", "The models are tested using a new experimental design dual-pivot transfer to evaluate the generalizability of a bilingually-supervised sentence encoder to the task of text mining for other language pairs on which it is not trained.", "This study shows that unsupervised bitext mining has strong performance which is comparable to bitext mining by a fully supervised model, so the proposed techniques can be applied to augment bilingual corpora for data-scarce language pairs in the future.", "Alignment with adversarial nets This work follows the line of previous studies which use adversarial networks (GANs) (Goodfellow et al., 2014) to align cross-domain distributions of embeddings without supervision of paired samples, in some cases in tandem with cycle consistency (Zhu et al., 2017), which encourages representations translated to another language then translated back to be similar to the starting representations.", "Conneau et al. 
(2018)'s MUSE project trains a linear map from the word-embedding space of one language to that of another using GANs, the method of which is later applied to an unsupervised machine translation model (Lample et al., 2018a).", "Cycle consistency in complement to adversarial training has been shown to be effective in helping to learn cross-lingual lexicon induction (Zhang et al., 2017; Xu et al., 2018; Mohiuddin and Joty, 2020).", "Our work is the first to our knowledge to apply such strategy of adversarial training and cycle consistency to the task of bitext mining.", "Alignment with pretrained LMs We adopt the training strategy aforementioned on top of pretrained multilingual language models, the extractability of multilingual information from which has been studied in several ways.", "Pires et al. (2019) find multilingual alignment in the multilingual BERT (mBERT) model (Devlin et al., 2019) pretrained on monolingual corpora only, while Conneau et al. (2020b) identify shared multilingual structure in monolingual BERT models.", "Other work studies the pretrained models dynamically by either fine-tuning the pretrained model for crosslingual alignment (Cao et al., 2020) or learning cross-lingual transformation (Aldarmaki and Diab, 2019) with supervision from aligned texts.", "Recently, Yang et al. (2020) use multitask training to train multilingual encoders focusing on the performance on retrieval, and Reimers and Gurevych (2020) use bitexts to tune multilingual language models and to distill knowledge from a teacher model which has been tuned on paraphrase pairs.", "Also, Chi et al. (2021a) pretrain an alternative XLM-R on a cross-lingual contrastive objective.", "Our work falls in the line of exploring multilingual-ity of pretrained models with a distinct emphasis on investigating the multilingual structure induced by bilingual training without fine-tuning or alternative pretraining.", "Unsupervised parallel sentence mining The evaluation task of our work is bitext mining without supervision from any bitexts or from bitexts of the pair of languages of the mining task.", "Such experiments have been explored previously.", "Hangya et al. (2018) show that unsupervised bilingual word embeddings are effective on bitext mining, and Hangya and Fraser (2019) further improve the system with a word-alignment algorithm.", "Kiros (2020) trains a lensing module over mBERT for the task of natural language inference (NLI) and transfers the model to bitext mining.", "Keung et al. (2020)'s system uses bootstrapped bitexts to fine-tune mBERT, while Kvapilkov et al. 
(2020)'s system uses synthetic bitexts from an unsupervised machine translation system to fine-tune XLM (Con-neau and Lample, 2019).", "Results from the three aforementioned studies are included in Section 5 for comparisons.", "Methodologically, our approach differs from the above in that our system is based on another pretrained model XLM-R (Conneau et al., 2020a) without fine-tuning, for one of the goals of the study is to understand the extractability of the alignment information from the pretrained model; and our model receives training signals from existing monolingual corpora or bitexts, instead of from NLI, bootstrapped, or synthesized data.", "The model as an encoder generates fixed-length vectors as sentence representations from the the hidden states of the pretrained multilingual language model XLM-R (Conneau et al., 2020a).", "Formally, given a sentence i, in language { s, t } , with the pretrained language model producing features x i of l layers, sequence length q , and embedding size d , the extraction module f ( ) generates a sentence embedding y i of fixed size d based on the features x i , or f ( x i ) = y i , x i R l q d and y i R d .", "With the parameters of XLM-R frozen, within the extraction module f ( ) are only two trainable components.", "The first is an ELMo-style trainable softmax-normalized weighted linear combination module (Peters et al., 2018), and the second being a trainable linear map.", "The linear combination module learns to weight the hidden states of every layer l of the pretrained language model and output a weighted average, on which a sum-pooling layer is then applied to q embeddings.", "And then the linear map takes this bag-of-word representation and produces the final sentence representation y i of the model.", "Monolingual corpora in different languages with language labels provide the signal for alignment if the semantic contents of the utterances share similar distributions across corpora.", "In order to exploit this information, we introduce to the model adversarial networks (Goodfellow et al., 2014) with cycle consistency (Zhu et al., 2017), to promote similarity in the distribution of representations.", "As is usual in GANs, there is a discriminator module d ( ) , which in this model consumes the representation y i and outputs continuous scores for the language identity of the sentence i, .", "Following Romanov et al. (2019), as inspired by Wasserstein-GAN (Arjovsky et al., 2017), the loss of the discriminator L disc is the difference between the unnormalized scores instead of the usual cross-entropy loss, or L disc = d ( y si ) d ( y tj ) .", "Adversarial training helps learning aligned encodings across languages at the distributional level.", "At the individual level, however, the model is not constrained to generate encodings which are both aligned and discriminative (Zhu et al., 2017).", "In particular, a degenerate encoder can produce pure noise which is distributively identical across languages.", "A cycle consistency module inspired by Zhu et al. 
(2017) is therefore used to constrain the model to encode with individual-discerning alignment.", "Cycle consistency is also reminiscent of the technique of using back-translation for unsupervised translation systems (Lample et al., 2018b).", "In this model, a trainable linear map F ( ) maps elements from the encoding space of one language to the space of the other, and another linear map G ( ) operates in the reverse direction.", "The cycle loss so defined is used to update parameters for both of the cycle mappings and the encoder: L cycle = h ( y si , G ( F ( y si ))) + h ( F ( G ( y tj )) , y tj ) where h is the triplet ranking loss function which sums the hinge costs in both directions: h ( a, b ) = (cid:88) n max(0 , sim( a, b ) + sim( a n , b )) + max(0 , sim( a, b ) + sim( a, b n )) , where the margin and the number of negative samples n are hyperparameters, and sim( ) is co-sine similarity.", "The loss function h encourages the model to encode similar representations between positive pairs ( a, b ) and dissimilar representations between negative pairs ( a n , b ) and ( a, b n ) , where a n and b n are sampled from the embeddings in the mini-batch.", "Based on the findings that the hard negatives, or non-translation pairs of high similarity between them, are more effective than the sum of negatives in the ranking loss (Faghri et al., 2018), our system always includes in the summands the costs from the hardest negatives in the mini-batch along with the costs from any other randomly sampled ones.", "The full loss of the unsupervised model is L unsup = L adv + L cycle , with a hyperparameter .", "In addition to the completely unsupervised model, we also experiment with a model which is supervised with bitext from one pair of languages and then transferred to other pairs.", "In this set-up, instead of using cyclical mappings, bitexts provide the alignment signal through the ranking loss directly, so the loss for the supervised model is L sup = h ( y si , y ti ) , where y si and y ti are representations of parallel sentences.", "The model is trained with the Adam optimizer (Kingma and Ba, 2014) and learning rate 0 .", "001 with the parameters in XLM-R frozen.", "Our training program is built upon AllenNLP (Gardner et al., 2018), HuggingFace Transformers (Wolf et al., 2020), and PyTorch (Paszke et al., 2019).", "The code for this study is released publicly.", "1 For adversarial training, the discriminator is updated times for every step of backpropagation to the encoder.", "Other hyperparameters include the dimension of the output representations d , number of negative samples n , margin value , and weight of the cycle loss .", "The hyperparameters and the the values which are experimented with are summarized in Table 1.", "We empirically determine the hyperparameters among experimented values, and report their values in specific evaluation sections.", "The bilingual corpora used to train the encoder is taken from OPUS (Tiedemann, 2012) as produced for the training of XLM (Conneau and Lample, 2019).", "2 We experimented with two language pairs for training the modelArabic-English (ar-en) and 1 The repository at https://github.com/cctien/ bimultialign .", "2 We use the script https://github.", "com/facebookresearch/XLM/blob/main/get-data-para.sh to get the corpora MultiUN (Eisele and Chen, 2010) and EUbookshop, where each training corpus we use is of 9 million sentences.", "German-English (de-en)to explore potential effects of the choice of the dual-pivots.", "After being trained, the 
encoder is evaluated on two tasks of bitext mining between texts in English and in another language.", "Additionally, we train the models with the pivot pair of Arabic-German (ar-de), which does not include English, to be evaluated on the second task.", "Four models, two unsupervised and two one-pair supervised trained on either of the two language pairs, are evaluated on two bitext mining or retrieval tasks of the BUCC corpus (Zweigenbaum et al., 2018) and of the Tatoeba corpus (Artetxe and Schwenk, 2019a).", "Unsupervised baselines The XLM-R (Conneau et al., 2020a) bag-of-embedding (boe) representations out-of-the-box serve as the unsupervised baseline.", "We identify the best-performing among the layers of orders of multiples of 4, or layer L { 0 , 4 , 8 , 12 , 16 , 20 , 24 } , as the baseline.", "In the case of BUCC mining task, for example, the best-performing baseline model is of layer 16 and denoted by XLM-R L16-boe.", "Results from Kiros (2020), Keung et al. (2020), and Kvapilkov et al. (2020), as state-of-the-art models for unsupervised bitext mining from pretrained language models, are included for comparison (see Section 2 for a description of them).", "Fully supervised models LASER (Artetxe and Schwenk, 2019b) and LaBSE (Feng et al., 2020), both fully supervised with multilingual bitexts, are included for comparisons.", "LASER is an LSTM-based encoder and translation model trained on parallel corpora of 93 languages, and is the earlier leading system on the two mining tasks.", "LaBSE on the other hand is a transformer-based multilingual sentence encoder supervised with parallel sentences from 109 languages using the additive margin softmax (Wang et al., 2018) for the translation language modeling objective, and has state-of-the-art performance on the two mining tasks.", "Finally, XLM-R+SBERT from Reimers and Gurevych, 2020 is XLM-R fine-tuned to align representations of bitexts of 50 language pairs and to distill knowledge from SBERT (Reimers and Gurevych, 2019) fine-tuned on English paraphrase pairs.", "The BUCC corpora (Zweigenbaum et al., 2018), consist of 95k to 460k sentences in each of 4 languagesGerman, French, Russian, and Mandarin Chinesewith around 3% of such sentences being English-aligned.", "The task is to mine for the translation pairs.", "Margin-based retrieval The retrieval is based on the margin-based similarity scores (Artetxe and Schwenk, 2019a) related to CSLS (Conneau et al., 2018), score( y s , y t ) = margin(sim( y s , y t ) , scale( y s , y t )) scale( y s , y t ) = (cid:88) z NN k ( y s ) sim( y s , z ) 2 k + (cid:88) z NN k ( y t ) sim( y t , z ) 2 k , where NN k ( y ) denotes the k nearest neighbors of y in the other language.", "Here we use k = 4 and the ratio margin function, or margin( a, b ) = a/b , following the literature (Artetxe and Schwenk, 2019b).", "By scaling up the similarity associated with more isolated embeddings, margin-based retrieval helps alleviate the hubness problem (Radovanovic et al., 2010), where some embeddings or hubs are nearest neighbors of many other embeddings with high probability.", "Following Hu et al. (2020), our model is evaluated on the training split of the BUCC corpora, and the threshold of the similarity score cutting 8700 off translations from non-translations is optimized for each language pair.", "While Kvapilkov et al. (2020) and Kiros (2020) optimize for the language-specific mining thresholds as we do here, Keung et al. 
(2020) use a prior probability to infer the thresholds.", "And different from all other baselines or models for comparisons presented here, Kvapilkov et al. (2020)'s model is evaluated upon the undisclosed test split of the BUCC corpus.", "Results F1 scores on the BUCC dataset presented in Table 2 demonstrate that bilingual alignment learned by the model is transferable to other pairs of languages.", "The hyperparameter values of the unsupervised model presented in the table are n = 1 , = 0 , = 5 , = 2 , and those of the supervised model are n = 1 , = 0 .", "The adversarially-trained unsupervised model outperforms the unsupervised baselines and nearing the state of the art, and is thus effective in extracting sentence representations which are sharable across languages.", "The choice of pivot pairs shows effects on the unsupervised models, with the model trained on the de-en texts performing better than that on the ar-en texts at mining for parallel sentences between English and German by 7 points.", "The results suggest that while alignment is transferable, the unsupervised model can be further improved for multilingual alignment by being trained on multilingual texts of more than two pivots.", "The one-pair supervised model trained with bitexts of one pair of languages, on the other hand, performs within a 6-point range of the fully supervised systems, which shows that much alignment information from unsupervised pretrained models is recoverable by the simple extraction module.", "Noticeably, the model supervised with ar-en bitexts but not from the four pairs of the task sees a 20-point increase from the plain XLM-R, and the choice of dual pivots does not have significant effects on the supervised model.", "We also measure the parallel sentence matching accuracy over the Tatoeba dataset (Artetxe and Schwenk, 2019b).", "This dataset consists of 100 to 1,000 English-aligned sentence pairs for 112 languages, and the task is to retrieve the translation in one of the target languages given a sentence in English using absolute similarity scores without margin-scaling.", "Results Matching accuracy for the retrieval task of the Tatoeba dataset are presented in Table", "3. 
Following Feng et al., 2020, average scores from different groups are presented to compare different models.", "The 36 languages are those selected by Xtreme (Hu et al., 2020), and the 41 languages are those for which results are presented in Kvapilkov et al., 2020.", "The hyperparameter values of the unsupervised model presented in the table are n = 2 , = 0 .", "2 , = 10 , = 2 , and those of the supervised model are n = 1 , = 0 .", "The unsupervised model outperforms the baselines by roughly 20 points, and the one-pair supervised model performs close to the the supervised model LASER but falls shorts by around 10 to 20 points to the other supervised model LaBSE.", "When one of the two pivot languages is English, the choice of the pivot does not show much difference on this task on average.", "While the models trained on ar-de (where neither pivot language is English) still exhibits strong transfer performance, there is a drop of around 2 to 6 points from the models where English is one of the pivot languages (ar-en and de-en).", "To understand the factors affecting the performance of the model, we consider several variants.", "All models presented in this section are trained with the same hyperparameters presented in the evaluation section above using either de-en corpora or corpora of multiple language pairs (Section 6.3).", "We ablate from the model the trainable componentsthe weighted linear combination and the linear mapas well as the two training losses, L adv and L cycle , of the unsupervised model.", "When the weighted linear combination is ablated, we use the unweighted average of embeddings across layers.", "We evaluate on the average accuracy over the 36 languages of the Tatoeba corpus.", "The results in Table 4 show a few interesting trends.", "First, the cycle consistency loss is essential for the unsupervised model, as can be seen by the very low performance when only L adv is used.", "Secondly, the linear map plays a larger role than the linear combination in extracting alignment information in the unsupervised model: in both conditions with cycle consistency loss, the linear map alone outperforms the linear combination alone, and in the condition with only cycle consistency loss, the linear map alone does best.", "Finally, in the one-pair supervised model, the linear combination module alone shows gains of 13 points from the baseline but does not produce gains when trained along with a linear map.", "Previous studies show that representations from different layers differ in their cross-lingual alignment (Pires et al., 2019; Conneau et al., 2020b).", "To understand this phenomenon in the present setting, we take layers whose orders are multiples of 4 and train the model with representations from one single layer without combining embeddings from different layers.", "The average accuracy scores over 36 languages in Tatoeba summarized in Figure 2 show that the alignment information is most extractable deep in the middle layers, corroborating the findings from the previous work (Kvapilkov et al., 2020; Litschko et al., 2021; Chi et al., 2021b).", "The model trained with the best-performing layer shows similar or higher scores than the full model with learned linear combination, which is consistent with the findings from the ablation that the learned linear combination is not essential for extracting alignment information.", "It is possible that a model trained on texts from more pairs of languages may improve upon the multilingual alignment so far demonstrated.", "To test this, 
we trained the model with the same hyper-8702 Model F1 score ( % ) xx en de fr ru zh Unsupervised Optimized thresholds 91 .", "parameters as in the previous section but on texts from multiple languages, 3 where each multi-pair model is trained on 16 million total sentences.", "The results are in Table", "5. There is an aggregate 3 point improvement for the unsupervised model and 5 point for the supervised model.", "The results suggest that with our models, one bilingual pivot is capable of extracting much transferable multilingual representations from XLM-R, but using more pivots can still improve the transferability of the representations to some extent.", "Previous work observes that in the BUCC mining task, the thresholds optimized for different language pairs are close to one another, suggesting that one can tune the threshold on high-resource pairs and use the system to mine other language pairs (Kiros, 2020).", "We examine the mining performance on the BUCC dataset of two threshold schemes: optimized thresholds, where thresholds are optimized for each language pair, and the dual-pivot threshold, where the threshold optimized for the pivot pair, in this case German-English, is used to mine all languages.", "The scores from these two schemes on BUCC are summarized in Table", "6. The results show that the thresholds optimized for the pivot pair transfer to other pairs with at most a 2-point decrease in the F1 score, and that the results of the two schemes are almost identical to the one-pair supervised model.", "These experiments corroborate the 3 The 16 pairs are those between en and ar, bg, de, el, es, fa, fr, hi, id, ru, sw, th, tr, ur, vi, zh .", "previous observation and demonstrate yet another case for leveraging texts from resource-rich pairs for unsupervised mining of other language pairs.", "To test whether our method of training language-agnostic sentence encoders encourages meaning-based representations, we evaluate the models on the multilingual semantic textual similarity (STS) 2017 (Cer et al., 2017) with Spearman correlation reported in Table", "7. 
All evaluation pairs on average see about 10 percentage-point increases from baseline (XLM-R L12-boe) for both models.", "Yet the gaps between our models and the fully supervised systems suggest that supervision with more language pairs and more trainable parameters likely encourages sentence representations to be closer to what humans see as meaning.", "This work shows that training for bilingual alignment benefits multilingual alignment for unsupervised bitext mining.", "The unsupervised model shows the effectiveness of adversarial training with cycle consistency for building multilingual language models, and reaches the state of the art of unsupervised bitext mining.", "Both unsupervised and one-pair supervised models show that significant multilingual alignment in an unsupervised language model can be recovered by a linear mapping, and that combining monolingual and bilingual training data can be a data-efficient method for promoting multilingual alignment.", "Future work may combine both the supervised and the unsupervised techniques to attain sentence embeddings with stronger multilingual alignment through the transferability of bilingual alignment demonstrated 8703 in this work, and such work will benefit tasks involving languages of low resources in bitexts.", "Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Herv Jgou.", "Word translation without parallel data.", "Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Arivazhagan, and Wei Wang.", "2020.", "Language-agnostic BERT sentence embedding." ]
[ "method", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "objective", "method", "objective", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "objective", "objective", "method", "other", "other", "objective", "method", "other", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain" ]
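Illustrative aside (not part of the dataset record above or the source paper): the record describes ratio-margin scoring with k nearest neighbours for bitext mining; the sketch below shows one plausible NumPy implementation of that scoring. The function name and the NumPy-based formulation are assumptions for this example.

import numpy as np

def margin_scores(src_emb, tgt_emb, k=4):
    """Ratio-margin scores between two sets of L2-normalized sentence embeddings.

    score(x, y) = cos(x, y) / scale(x, y), where scale(x, y) averages the cosine
    similarity of x to its k nearest targets and of y to its k nearest sources,
    each divided by 2, as described in the record above.
    """
    sim = src_emb @ tgt_emb.T                             # (n_src, n_tgt) cosine matrix
    src_knn = np.sort(sim, axis=1)[:, -k:].mean(axis=1)   # mean sim of each source to its k-NN
    tgt_knn = np.sort(sim, axis=0)[-k:, :].mean(axis=0)   # mean sim of each target to its k-NN
    scale = (src_knn[:, None] + tgt_knn[None, :]) / 2.0
    return sim / scale

# Candidate pairs can then be taken as scores.argmax(axis=1), kept only when the
# score exceeds a threshold tuned on development data, as the record describes.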
[ "This paper proposes Dynamic Memory Induction Networks (DMIN) for few-shot text classification.", "The model utilizes dynamic routing to provide more flexibility to memory-based few-shot learning in order to better adapt the support sets, which is a critical capacity of few-shot classification models.", "Based on that, we further develop induction models with query information, aiming to enhance the generalization ability of meta-learning.", "The proposed model achieves new state-of-the-art results on the miniRCV1 and ODIC dataset, improving the best performance (accuracy) by 2 4 % .", "Detailed analysis is further performed to show the effectiveness of each component.", "Few-shot text classification, which requires models to perform classification with a limited number of training instances, is important for many applications but yet remains to be a challenging task.", "Early studies on few-shot learning (Salamon and Bello, 2017) employ data augmentation and regularization techniques to alleviate overfitting caused by data sparseness.", "More recent research leverages meta-learning (Finn et al., 2017; Zhang et al., 2018; Sun et al., 2019) to extract transferable knowledge among meta-tasks in meta episodes.", "A key challenge for few-shot text classification is inducing class-level representation from support sets (Gao et al., 2019), in which key information is often lost when switching between meta-tasks.", "Recent solutions (Gidaris and Komodakis, 2018) leverage a memory component to maintain mod-els' learning experience, e.g., by finding from a supervised stage the content that is similar to the unseen classes, leading to the state-of-the-art performance.", "However, the memory weights are static Corresponding author.", "during inference and the capability of the model is still limited when adapted to new classes.", "Another prominent challenge is the instance-level diversity caused by various reasons (Gao et al., 2019; Geng et al., 2019), resulting in the difficulty of finding a fixed prototype for a class (Allen et al., 2019).", "Recent research has shown that models can benefit from query-aware methods (Gao et al., 2019).", "In this paper we propose Dynamic Memory Induction Networks (DMIN) to further tackle the above challenges.", "DMIN utilizes dynamic routing (Sabour et al., 2017; Geng et al., 2019) to render more flexibility to memory-based few-shot learning (Gidaris and Komodakis, 2018) in order to better adapt the support sets, by leveraging the routing component's capacity in automatically adjusting the coupling coefficients during and after training.", "Based on that, we further develop induction models with query information to identify, among diverse instances in support sets, the sample vectors that are more relevant to the query.", "These two modules are jointly learned in DMIN.", "The proposed model achieves new state-of-the-art results on the miniRCV1 and ODIC datasets, improving the best performance by 2 4 % accuracy.", "We perform detailed analysis to further show how the proposed network achieves the improvement.", "Few-shot learning has been studied in early work such as (Fe-Fei et al., 2003; Fei-Fei et al., 2006) and more recent work (Ba et al., 2016; Santoro et al., 2016; Munkhdalai and Yu, 2017; Ravi and Larochelle, 2016; Mishra et al., 2017; Finn et al., 2017; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Allen et al., 2019).", "Researchers have also investigated few-shot learning in various NLP tasks (Dou et al., 2019; Wu et al., 2019; Gu et 
al., 2018; Chen et al., 2019; Obamuyide and Vlachos, 2019; Hu et al., 2019), including text classification (Yu et al., 2018; Rios and Kavuluru, 2018; Xu et al., 2019; Geng et al., 2019; Gao et al., 2019; Ye and Ling, 2019).", "The memory mechanism has been shown to be very effective in many NLP tasks (Tang et al., 2016; Das et al., 2017; Madotto et al., 2018).", "In the few-shot learning scenario, researchers have applied memory networks to store the encoded contextual information in each meta episode (Santoro et al., 2016; Cai et al., 2018; Kaiser et al., 2017).", "Specifically, Qi et al. (2018) and Gidaris and Komodakis (2018) build a two-stage training procedure and regard the class representations learned in the supervised stage as a memory component.", "An overview of our Dynamic Memory Induction Networks (DMIN) is shown in Figure 1, which is built on the two-stage few-shot framework of Gidaris and Komodakis (2018).", "In the supervised learning stage (upper, green subfigure), a subset of classes in the training data is selected as the base set, consisting of C base number of base classes, which is used to finetune a pretrained sentence encoder and to train a classifier.", "In the meta-learning stage (bottom, orange subfigure), we construct an episode to compute gradients and update our model in each training iteration.", "For a C -way K -shot problem, a training episode is formed by randomly selecting C classes from the training set and choosing K examples within each selected class to act as the support set S = ∪_{c=1}^{C} { x_{c,s} , y_{c,s} }_{s=1}^{K} .", "A subset of the remaining examples serves as the query set Q = { x_q , y_q }_{q=1}^{L} .", "Training on such episodes is conducted by feeding the support set S to the model and updating its parameters to minimize the loss in the query set Q .", "We expect that developing few-shot text classifiers should benefit from the recent advances in pretrained models (Peters et al., 2018; Devlin et al., 2019; Radford et al.).", "Unlike recent work (Geng et al., 2019), we employ BERT-base (Devlin et al., 2019) for sentence encoding, which has been used in recent few-shot learning models (Bao et al., 2019; Soares et al., 2019).", "Figure 1: An overview of Dynamic Memory Induction Network with a 3-way 2-shot example.", "The model architecture of BERT (Devlin et al., 2019) is a multi-layer bidirectional Transformer encoder based on the original Transformer model (Vaswani et al., 2017).", "A special classification embedding ([CLS]) is inserted as the first token and a special token ([SEP]) is added as the final token.", "We use the d -dimensional hidden vector output from the [CLS] as the representation e of a given text x : e = E ( x | θ ) .", "The pretrained BERT model provides a powerful context-dependent sentence representation and can be used for various target tasks, and it is suitable for the few-shot text classification task (Bao et al., 2019; Soares et al., 2019).", "We finetune the pre-trained BERT encoder in the supervised learning stage.", "For each input document x , the encoder E ( x | θ ) (with parameters θ ) will output a vector e of d dimensions.", "W base is a matrix that maintains a class-level vector for each base class, serving as a base memory for meta-learning.", "Both E ( x | θ ) and W base will be further tuned in the meta-training procedure.", "We will show in our experiments that replacing previous models' encoders with the pretrained encoder outperforms the corresponding state-of-the-art models, and that the proposed DMIN can further improve over that.", "At the meta-learning stage, to induce class-level representations from given support sets, we develop a dynamic memory module (DMM) based on knowledge learned from the supervised learning stage through the memory matrix W base .", "Unlike static memory (Gidaris and Komodakis, 2018), DMM utilizes dynamic routing (Sabour et al., 2017) to render more flexibility to the memory learned from base classes so that it can better adapt to support sets.", "The routing component can automatically adjust the coupling coefficients during and after training, which inherently suits the needs of few-shot learning.", "Specifically, the instances in the support sets are first encoded by BERT into sample vectors { e_{c,s} }_{s=1}^{K} and then fed to the following dynamic memory routing process.", "Given a memory matrix M (here W base ) and a sample vector q ∈ ℝ^d , the algorithm aims to adapt the sample vector based on the memory M learned in the supervised learning stage.", "First, for each entry m_i ∈ M , the standard matrix-transformation and squash operations in dynamic routing (Sabour et al., 2017) are applied to the inputs:", "where the transformation weights W_j and bias b_j are shared across the inputs to fit the few-shot learning scenario.", "We then calculate the Pearson Correlation Coefficients (PCCs) (Hunt, 1986; Yang et al., 2019) between m_i and q_j .", "p_ij = tanh ( PCCs ( m_ij , q_j ) ) , (4) PCCs = Cov ( x1 , x2 ) / ( σ_x1 σ_x2 ) .", "(5) where the general formula of PCCs is given above for
vectors x1 and x2 .", "Since PCCs values are in the range of [-1, 1], they can be used to encourage or penalize the routing parameters.", "The routing iteration process can now adjust the coupling coefficients, denoted as d_i , with regard to the input capsules m_i , q and higher-level capsules v_j .", "d i = softmax ( i ) , (6) ij = ij + p ij m i v j . (7)", "Since our goal is to develop a dynamic routing mechanism over memory for few-shot learning, we add the PCCs to the routing agreements in every routing iteration, as shown in Eq.", "8.", "v_j = ∑_{i=1}^{n} ( d_ij + p_ij ) m_ij , (8) v_j = squash ( v_j ) .", "(9) Algorithm 1 Dynamic Memory Routing Process. Require: r , q and memory M = { m_1 , m_2 , ..., m_n }. Ensure: v = v_1 , v_2 , ..., v_l , q′. 1: for all m_i , v_j do 2: m_ij = squash ( W_j m_i + b_j ) 3: q_j = squash ( W_j q + b_j ) 4: ij = 0 5: p_ij = tanh ( PCCs ( m_ij , q_j ) ) 6: end for 7: for r iterations do 8: d i = softmax ( i ) 9: v_j = ∑_{i=1}^{n} ( d_ij + p_ij ) m_ij 10: v_j = squash ( v_j ) 11: for all i, j : ij = ij + p_ij m_ij · v_j 12: for all j : q_j = ( q_j + v_j ) / 2 13: for all i, j : p_ij = tanh ( PCCs ( m_ij , q_j ) ) 14: end for 15: q′ = concat [ v ] 16: Return q′. We update the coupling coefficients and p_ij with Eq.", "6 and Eq.", "7, and finally output the adapted vector q′ as in Algorithm 1.", "The Dynamic Memory Module (DMM) aims to use DMR to adapt the sample vectors e_{c,s} , guided by the memory W base .", "That is, the resulting adapted sample vector is computed as e′_{c,s} = DMR ( W base , e_{c,s} ) .", "After the sample vectors { e′_{c,s} }_{s=1,...,K} are adapted and the query vectors { e_q }_{q=1}^{L} are encoded by the pretrained encoder, we now incorporate queries to build a Query-guided Induction Module (QIM).", "The aim is to identify, among the (adapted) sample vectors of support sets, the vectors that are more relevant to the query, in order to construct class-level vectors to better classify the query.", "Since dynamic routing can automatically adjust the coupling coefficients to help enhance related (e.g., similar) queries and sample vectors, and penalize unrelated ones, QIM reuses the DMR process by treating the adapted sample vectors as memory of background knowledge about novel classes, and induces a class-level representation from the adapted sample vectors that are more relevant/similar to the query under concern.", "In the final classification stage, we then feed the novel class vector e_c and query vector e_q to the classifier discussed above in the supervised training stage and get the classification score.", "The standard setting for neural network classifiers is, after having extracted the feature vector e ∈ ℝ^d , to estimate the classification probability vector p by first computing the raw classification score s_k of each category k ∈ [1 , K] using the dot-product operator s_k = e^T w_k , and then applying the softmax operator across all the K classification scores.", "However, this type of classifier does not fit few-shot learning due to completely novel categories.", "In this work, we compute the raw classification scores using a cosine similarity operator: s_k = τ cos ( e, w_k ) = τ ē^T w̄_k , (11) where ē = e / ‖e‖ and w̄_k = w_k / ‖w_k‖ are l2-normalized vectors, and τ is a learnable scalar value.", "After the base classifier is trained, all the feature vectors that belong to the same class must be very closely matched with the single classification weight vector of that
class.", "So the base classification weights W base = { w b } C base b =1 trained in the 1 st stage can be seen as the base classes' feature vectors.", "In the few-shot classification scenario, we feed the query vector e q and novel class vector e c to the classifier and get the classification scores in a unified manner.", "In the supervised learning stage, the training objective is to minimize the cross-entropy loss on C base number of base classes given an input text x and its label y :", "where y is one-hot representation of the ground truth label, and y is the predicted probabilities of base classes with y k = softmax ( s k ) .", "In the meta-training stage, for each meta episode, given the support set S and query set Q = { x q , y q } Lq =1 , the training objective is to minimize the cross-entropy loss on C novel classes.", "where y q = softmax ( s q ) is the predicted probabilities of C novel classes in this meta episode, with s q = { s q,c } Cc =1 from Equation 12.", "We feed the support set S to the model and update its parameters to minimize the loss in the query set Q in each meta episode.", "We evaluate our model on the miniRCV1 (Jiang et al., 2018) and ODIC dataset (Geng et al., 2019).", "Following previous work (Snell et al., 2017; Geng et al., 2019), we use few-shot classification accuracy as the evaluation metric.", "We average over 100 and 300 randomly generated meta-episodes from the testing set in miniRCV1 and ODIC, respectively.", "We sample 10 test texts per class in each episode for evaluation in both the 1 -shot and 5 -shot scenarios.", "We use Google pre-trained BERT-Base model as our text encoder, and fine-tune the model in the training procedure.", "The number of base classes C base on ODIC and miniRCV1 is set to be 100 and 20, respectively.", "The number of DMR interaction is", "3. We build episode-based meta-training models with C = [5 , 10] and K = [1 , 5] for comparison.", "In addition to using K sample texts as the support set, the query set has 10 query texts for each of the C sampled classes in every training episode.", "For example, there are 10 5+5 5 = 75 texts in one training episode for a 5-way 5-shot experiment.", "We compare DMIN with various baselines and state-of-the-art models: BERT (Devlin et al., 2019) finetune, ATAML (Jiang et al., 2018), Rel.", "Net (Sung et al., 2018), Ind.", "Net (Geng et al., 2019), HATT (Gao et al., 2019), and LwoF (Gidaris and Komodakis, 2018).", "Note that we re-implement them with the BERT sentence encoder for direct comparison.", "Overall Performance The accuracy and standard deviations of the models are shown in Table 1 and", "2. We can see that DMIN consistently outperform all existing models and achieve new state-of-the-art results on both datasets.", "The differences between DMIN and all the other models are statistically significant under the one-tailed paired t-test at the 95% significance level.", "Note that LwoF builds a two-stage training procedure with a memory module learnt from the supervised learning and used in the meta-learning stage, but the memory mechanism is static after training, while DMIN uses dynamic memory routing to automatically adjust the coupling coefficients after training to generalize to novel classes, and outperform LwoF significantly.", "Note also that the performance of some of the baseline models (Rel. Net and Ind. Net) reported in Table 1 and 2 is higher than that in Geng et al. 
(2019) since we used BERT to replace BiLSTM-based encoders.", "The BERT encoder improves the baseline models by a powerful context meaning representation ability, and our model can further outperform these models with a dynamic memory routing method.", "Even with these stronger baselines, the proposed DMIN consistently outperforms them on both dataset.", "Ablation Study We analyze the effect of different components of DMIN on ODIC in Table", "3. Specifically, we remove DMM and QIM, and vary the number of DMR iterations.", "We see that the best performance is achieved with 3 iterations.", "The results show the effectiveness of both the dynamic memory module and the induction module with query information.", "Figure 2 is the t-SNE visualization (Maaten and Hinton, 2008) for support sample vectors before", "and after DMM under a 10 -way 5 -shot setup on ODIC.", "We randomly select a support set with 50 texts (10 texts per class) from the ODIC testing set, and obtain the sample vectors before and after DMM, i.e., { e c,s } c =1 ,... 5 ,s =1 ... 10 and (cid:8) e (cid:48) c,s (cid:9) c =1 ,... 5 ,s =1 ... 10 .", "We can see that the support vectors produced by the DMM are better separated, demonstrating the effectiveness of DMM in leveraging the supervised learning experience to encode semantic relationships between lower level instance features and higher level class features for few-shot text classification.", "We propose Dynamic Memory Induction Networks (DMIN) for few-shot text classification, which builds on external working memory with dynamic routing, leveraging the latter to track previous learning experience and the former to adapt and generalize better to support sets and hence to unseen classes.", "The model achieves new state-of-the-art results on the miniRCV1 and ODIC datasets.", "Since dynamic memory can be a learning mechanism more general than what we have used here for few-shot learning, we will investigate this type of models in other learning problems.", "The authors would like to thank the organizers of ACL-2020 and the reviewers for their helpful suggestions.", "The research of the last author is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC)." ]
[ "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "objective", "abstain", "objective", "other", "other" ]
[ "Incorporating domain knowledge is vital in building successful natural language processing (NLP) applications.", "Many times, cross-domain application of a tool results in poor performance as the tool does not account for domain-specific attributes.", "The clinical domain is challenging in this aspect due to specialized medical terms and nomenclature, shorthand notation, fragmented text, and a variety of writing styles used by different medical units.", "Temporal resolution is an NLP task that, in general, is domain-agnostic because temporal information is represented using a limited lexicon.", "However, domain-specific aspects of temporal resolution are present in clinical texts.", "Here we explore parsing issues that arose when running our system, a tool built on Newswire text, on clinical notes in the THYME corpus.", "Many parsing issues were straightforward to correct; however, a few code changes resulted in a cascading series of parsing errors that had to be resolved before an improvement in performance was observed, revealing the complexity of temporal resolution and rule-based parsing.", "Our system now outperforms current state-of-the-art systems on the THYME corpus with little change in its performance on Newswire texts.", "Temporal resolution is required for comprehending many types of communication, including written texts.", "This is especially true in clinical texts as patient narratives revolve around when an event happened, such as when a symptom occurred or the frequency a drug was administered (Lee et al., 2017; Sun et al., 2013b).", "Understanding the temporal component in texts is vital for many NLP systems (Tissot et al., 2015) to accurately interpret a patient narrative (Sun et al., 2013b).", "Some temporal expressions could be considered domain agnostic as there are limited ways to represent information about time, such as formatted dates or days of the week.", "However, there are many lexical variations of these standard tokens.", "Additionally, vague temporal expressions, relative times, and event durations require contextual or implicit knowledge of the subject area for resolution (Sun et al., 2013b).", "Clinical texts include all these types of temporal expressions, and also contain domain-specific challenges to temporal expression identification and normalization, such as differentiating between dosage and time.", "Additionally, clinical texts frequently use repeated phrases such as At this time that are infrequently used in the general domain.", "These phrases are vague, relative, and require contextual knowledge of the subject area and the time of events to be resolved (Sun et al., 2013b).", "In this work we focus on identification of temporal expressions in clinical texts using Chrono a hybrid system that normalizes temporal expressions into the SCATE Schema (Bethard and Parker, 2016).", "Originally designed on general domain Newswire texts, we evaluate Chrono's performance on the clinical THYME corpus (Styler et al., 2014) out-of-the-box with no modifications, perform an error analysis, algorithm updates, and then re-evaluate on THYME.", "This analysis reveals six aspects of temporal expression extraction that should be considered when using a general domain tool in the clinical domain.", "State-of-the-art temporal expression extraction and normalization tools have emerged from temporal parsing challenges such as TempEval (Ver-hagen et al., 2007, 2010; UzZaman et al., 2013) and i2b2 (Sun et al., 2013a).", "Strategies utilized by these tools range from 
rule-based (SU-Time (Chang and Manning, 2012), HeidelTime Figure 1: Overview of Chrono Workflow (Strotgen and Gertz, 2010), NavyTime (Cham-bers, 2013), GUTime (Verhagen et al., 2005)) to machine learning (TRIPS and TRIOS (UzZaman and Allen, 2010), ClearTK (Bethard, 2013)) and hybrid approaches (ManTIME (Filannino et al., 2013)).", "For general domain texts, machine learning systems like ClearTK perform well at identifying temporal expression spans; however, rule-based and hybrid systems have better performance when taking temporal expression normalization into account (UzZaman et al., 2013).", "When applied to clinical text in the 2012 i2b2 Challenge, high-ranking general domain systems SUTime, GUTime, and HeidelTime had reduced performance (Sun et al., 2013a) as compared to systems built specifically for this data (Sohn et al., 2013; Kovaevi et al., 2013; Xu et al., 2013).", "Regardless of the performance on general domain texts, modifications had to be made to the state-of-the-art systems to recognize clinical temporal expressions and achieve improved performance.", "For example, three teams utilized HeidelTime with two teams incorporating additional rules and machine learning modules on top of the default system, which achieved better performance in the 2012 i2b2 Challenge than HeidelTime with no modifications.", "In addition to temporal challenges, other systems have been developed for general domain temporal parsing that utilize machine learning and complex grammars (Lee et al., 2014; Angeli et al., 2012) and rule-based methods referencing a central knowledge base (Llorens et al., 2012).", "SynTime (Zhong et al., 2017) takes a simplistic approach by defining a layer of syntactic token types that rules are applied to instead of processing the raw tokens.", "For temporal expression extraction, SynTime out-performs HeidelTime and SU-Time, however, it does not attempt normalization.", "All these systems were built and trained on general domain texts, such as TimeBank (Pustejovsky et al., 2003) and WikiWars (Mazur and Dale, 2010) and may require adjustments to accurately capture clinical temporal expressions.", "In addition, these systems normalize expressions into the ISO-TimeML (Pustejovsky et al., 2010) representation, which is unable to represent expressions that don't map to a single calendar unit or are relative to an event instead of a temporal unitboth of which are frequent in clinical texts.", "The SCATE schema is able to faithfully represent these types of expressions, but normalization requires a more detailed approach to annotate fine-grained temporal components that are not captured by TimeML (Bethard and Parker, 2016).", "In this work we adapt Chrono, a novel SCATE normalization system, to the clinical domain and describe the challenges encountered when normalizing to the SCATE Schema.", "Chrono is a hybrid rule-based and machine learning system built to identify temporal expressions in the AQUAINT corpus of Newswire texts (Graff, 2002) followed by normalization into the SCATE Schema for SemEval 2018 Task 6 (Laparra et al., 2018).", "Chrono consists of 3 main modules:", "1) Temporal Phrase Extraction,", "2) SCATE Normalization, and", "3) Temporal Disambiguation (Fig-ure 1).", "Briefly, the Temporal Phrase Extraction module identifies temporal/numeric tokens using a series of hierarchical rules and regular expressions.", "Temporal phrases are extracted based on consecutive tagged temporal/numeric tokens.", "Next, the SCATE Normalization module normalizes temporal phrases into 
the SCATE Schema using additional rule-based logic and regular expressions to identify specific temporal entities within each phrase, and links related sub-intervals.", "Finally, machine learning is used in the Temporal Disambiguation module as a sub-module of SCATE Normalization to disambiguate certain SCATE entities.", "Details on the specific rules implemented by Chrono for SemEval 2018 can be found in the systems description paper (Olex et al., 2018), and Chrono can be downloaded from https://github.com/AmyOlex/Chrono.", "The THYME corpus consists of de-identified clinical notes and pathology reports for colon and brain cancer patients.", "For this work, we utilized the subset of the THYME colon cancer documents that have associated SCATE annotations in the Anafora XML format from SemEval 2018 Task 6 (Laparra et al., 2018).", "The Training Corpus includes 22 clinical notes and 13 pathology reports along with their gold standard Anafora XML annotations.", "The Evaluation Corpus includes 92 clinical notes and 49 pathology reports with the annotations withheld.", "In this work, Chrono is first run on the THYME Evaluation Corpus before modifications are made, then the THYME Training Corpus is used to identify problem areas in need of improvement.", "Finally, Chrono is run on the Evaluation Corpus again after making improvements.", "Data in the Evaluation Corpus remained hidden through the entire process.", "Evaluation of Chrono's performance on the Training Corpus utilized python scripts provided by AnaforaTools that compare Anafora XML (Chen and Styler, 2013) annotation files.", "All metrics reported exclude the Event entity because event identification is currently not implemented by Chrono, and was not included in the SemEval Task.", "Chrono's annotation of the Evaluation Corpus was uploaded to the Post-Evaluation submission system for SemEval 2018 Task 6, and overall Precision, Recall, and F1 measures are reported in Tables 1 and 3.", "https://github.com/bethard/anaforatools 6 Results and Discussion This section first discusses Chrono's out-of-the-box performance on the THYME Evaluation Corpus prior to any code changes.", "The next section presents parsing issues encountered using the Training Corpus that fall into six main categories:", "1) lexical,", "2) entity frequency,", "3) numeric disambiguation,", "4) machine learning training data,", "5) writing style, and", "6) document structure.", "While fixing some of these issues was straightforward, more complex issues resulted in debugging an error cascade before performance increased.", "Finally, a discussion of Chrono's improved performance on the THYME Evaluation Corpus is presented.", "Chrono's performance decreased significantly on the THYME Evaluation Corpus out-of-the-box with an F1 of 0.35, precision of 0.49, and recall of 0.27 (Table 1).", "This is due to Chrono having only been trained on Newswire text, thus, it saw a limited number of temporal expression examples.", "Chrono's performance on the THYME Training Corpus resulted in an F1 of 0.314 when considering all entity properties (100% Correct Entity), and an F1 of 0.468 when only considering correct token span (Span Only).", "The higher Span Only result indicates that Chrono is identifying more correct entities than the 100% Correct Entity score indicates, but it is not assigning all the properties correctly.", "With the AnaforaTools evaluation script we are able to look at the performance on each SCATE entity individually to identify specific entities that significantly 
impact performance.", "Addressing cross-domain parsing issues felt synonymous to playing the arcade game of Whack-A-Mole, where as one issue was fixed another popped up.", "Several code improvements resulted in a cascading series of other code bugs and/or logical issues that needed resolution prior to realizing a performance improvement.", "This section describes these adventures in code improvement, which identify six primary challenges encountered in cross-domain application of temporal expression extraction.", "The following examples relay how complex and interconnected temporal expression extraction can be, and demonstrate the need to go beyond basic pattern identification and dictionary look-up strategies to including contextual and semantic information in order to capture all types of temporal expressions.", "Different domains are expected to differ in their lexicon.", "For example, the clinical domain contains many specialized medical terms and clinical jargon that is not encountered in general domain texts (Meystre et al., 2008).", "This is also true for a temporal lexicon.", "Originally trained on the Newswire corpus, Chrono's lexicon was limited to examples found in this domain; however, by expanding Chrono's temporal lexicon the performance on several SCATE entities increased.", "Performance on the SCATE entity Modifier improved after refining the lexicon to include missed terms such as nearly, almost, mid, over, early, and beginning, and removing terms that should be annotated with other entities such as this, next, and last.", "These descriptive temporal tokens are commonly used in clinical texts to describe various events in the patient narrative such as when symptoms occur or patient histories.", "The PartOfDay entity was also augmented with the terms bedtime, eve, and midnight as these, and similar terms, are frequently utilized in clinical notes for medication instructions, such as take one pill at bedtime.", "Significant improvement in performance was observed after these additions, with an F1 increase of 0.117 for PartOfDay, and an F1 increase of 0.241 for Modifier.", "Patient records revolve around temporal information, such as conveying medication instructions, describing symptom time lines, and outlining patients' histories.", "We found that temporal phrases associated with these events, like at that time, take one-time daily, in four weeks time, since that time, etc., were ubiquitous.", "All of these expressions include the token time, which is annotated as a Period entity in the SCATE Schema.", "This token, along with others found frequently in clinical text such as /min and /week that are most commonly used as short-hand for conveying medication frequency, were not included in Chrono's temporal lexicon.", "This resulted in poor performance for the CalendarInterval and Period SCATE entities.", "The addition of 15 terms that were not present in the Newswire corpus significantly improved performance for these phrases.", "This result indicates that commonly used tokens have domain-specific frequencies.", "For example, the token time was used on average 0.32 times per document in the Newswire corpus and just over 4 times per document in the THYME corpus (Table 2).", "The frequency for some lexical terms, like time, in clinical texts is understandable as certain concepts that convey a patient's narrative may be utilized over and over again.", "However, it is interesting that this observation also applies at the temporal entity level.", "For example, the initial build of 
Chrono excluded the SCATE entity Frequency because it is highly complex to parse and did not appear regularly in the Newswire corpus (0.12 times per document on average, Table 2).", "However, in the THYME corpus, the Frequency entity appeared on average 8.9 times per document a 72-fold increasewhich had a major impact on Chrono's performance.", "In clinical texts, phrases specifying frequency such as 2 time per day or once a day are abundant as they are routinely used for specifying medication or symptom frequency.", "This increase in clinical usage extends to all but two temporal entities, with Frequency having the second highest fold change next to Event (Table 2).", "Clinical text commonly contains non-temporal numerical information representing lab test results or medication dosage along with their frequency.", "The majority of these instances in the THYME corpus were not identified as temporal because their values and formats were distinct.", "However, Chrono confused a few occurrences of medication dosage with a 24-hour time instance.", "For example, in the phrase Vitamin D-3 1000 unit tablet the 1000 was incorrectly assigned the 24-hour time value of 10am.", "In the current implementation of Chrono, if a 4-digit dose falls within the correct year range (1500 to 2050) or 24-hour time it will Chrono Newswire Clinical Entity Implements Avg Freq Avg Freq AMPM-Of-Day Y 0.06 1.26 After Y 0.25 2.29 Before Y 0.44 0.91 Between N 0.28 1.11 Calendar-Interval Y 1.83 6.80 Day-Of-Month Y 2.84 8.66 Day-Of-Week Y 1.33 1.29 Event N 0.91 151.97 Every-Nth N 0 0.09 Frequency N 0.12 8.91 Hour-Of-Day Y 1.15 1.46 Intersection Y 0.11 1.60 Last Y 2.80 3.86 Minute-Of-Hour Y 1.12 1.31 Modifier Y 0.42 1.31 Month-Of-Year Y 3.31 9.77 Next Y 0.72 0.80 NotNormalizable N 0.06 0.06 NthFromStart Y 0.30 0 Number Y 1.17 13.66 Part-Of-Day Y 0.19 0.91 Part-Of-Week Y 0.04 0 Period Y 1.64 4.97 Season-Of-Year Y 0.07 0.03 Second-Of-Minute Y 0.67 0.17 Sum N 0.01 0.03 This Y 1.43 2.60 Time-Zone Y 0.44 0 Two-Digit-Year Y 0.98 0.23 Union N 0.02 0.03 Year Y 1.67 9.91 Table 2: The average frequency per document of each SCATE Entity for the Newswire (81 documents) and THYME (35 documents) training corpora.", "be annotated as such.", "A fix for this issue has yet to be implemented in Chrono, as it has a low rate of occurrence, but may include rules to identify dosage amounts such as mg and machine learning methods to disambiguate 4-digit numbers.", "Another example of the need to disambiguate numerical values is found in the clinical phrase Carotid pulses are 4/4.", "Without context, the 4/4 could be interpreted as the date April 4th.", "This instance did not cause an issue with Chrono because a 2or 4-digit year is required for a phrase to be identified as a formatted date.", "While this strategy worked for this example, it could become a problem when parsing files that contain year-less formatted dates.", "Thus, future improvements will include a numerical disambiguation module to aid in determining if a numerical phrase is temporal.", "Supervised machine learning (ML) methods require the use of annotated training data in order to generate a predictive model.", "Naturally, training data is chosen from the domain of the task as it is the most relevant.", "Chrono utilizes ML to disambiguate the SCATE entities Period and CalendarInterval.", "First, rule-based logic identifies if an entity is a possible Period or CalendarInterval, but it is hard to tell which one without considering context.", "Then the ML module decides which class 
the entity should be labeled.", "The training data for this task was initially from the Newswire corpus, but this performed poorly on clinical texts with an overall F1 of 0.544.", "To incorporate domain-specific contextual elements, Chrono was re-trained using just the THYME corpus, which improved performance to an F1 of 0.577.", "We then generated a model that utilized both the Newswire and THYME data, which performed slightly better, giving an F1 of 0.578.", "As temporal expressions can be domain-agnostic, it makes sense that training on cross-domain data would generate a more robust and generalizable model; therefore, we chose to use the cross-domain model.", "An advantage of processing clinical texts is that you are introduced to a variety of writing styles and preferences from different departments and medical personnel, where each may represent the same temporal concept differently.", "This results in lexical variations of concepts, for example, the concept of Monday can be represented as M, Mon., or, monday, and a temporal reasoning system must be able to identify that these all refer to the same day.", "The following sub-sections discuss issues associated with variation in formatted dates, times, and long temporal phrases.", "Variation in Formatted Dates/Times: There are a number of standard formats to convey dates and times, of which only a few were identified in the Newswire corpus and implemented in Chrono.", "Clinical texts introduced additional variability in date and time formats that Chrono was unable to handle correctly.", "For example, the date format 21-SEP-2009 contains a mixture of letters and numbers needing to be interpreted.", "Chrono uses regular expressions to identify formatted dates and times; however, the expression restricted all components to be digits, so dates with alphanumeric characters were not captured.", "Editing the regular expression to allow for alphanumeric characters fixed the capturing issue, but resulted in an error downstream where other methods expected a numeric month to be returned.", "Ultimately, a custom function was written to convert months represented as text to integers as existing conversion packages were not versatile enough to accommodate all lexical variations of these entities.", "Similarly, hour and minute formats such as 5:45 PM were not being recognized correctly because Chrono's regular expression looked specifically for the format found in the Newswire corpus that contained seconds (hh:mm:ss).", "Debugging formatted time expressions proved to be a challenge because Chrono utilizes three different modules to parse out this data.", "First, a module to identify the hours, minutes, and seconds, followed by a module to identify AMPM entities, and finally, a module to link sub-intervals where both MinuteOfHour and AMPM entities are subintervals of HourOfDay.", "Interestingly, the performance of HourOfDay for the Span Only evaluation had an F1 score of 0.941 both before and after improvements, indicating that Chrono was actually identifying most of the hours correctly, but was missing specific SCATE properties.", "Punctuation To Include or Not to Include?", "Part of the HourOfDay parsing issue stemmed from temporal phrases at the end of a sentence, such as 2:04 AM., where the period ended up being part of the AM string.", "Initially, Chrono looked for AMPM entities without considering punctuation unlike the MonthOfYear parsing, which specifically accounts for punctuation such as Dec..", "Thus, the AM. 
in the example was never identified, so the HourOfDay entity 2 would be lacking the subinterval link to the AMPM entity.", "To resolve this, Chrono was modified to utilize regular expressions in parsing out AMPM entities with and without surrounding punctuation.", "One dilemma arose when considering the variants of an AMPM entity.", "For example, valid AMPM entity strings include AM, am, A.M., and a.m.; however, AM. may not be considered a valid representation of an AMPM entity.", "Thus, Chrono specifically includes the period in the span only if there is a period after each letter in strings (e.g. A.M.), otherwise, the period is not included in the span.", "Implementing this fix resulted in a significant performance improvement for the AMPM entity and, oddly, a decrease in HourOfDay performance.", "Where have the Minutes Gone?", "While the HourOfDay entity was performing well in the Span Only evaluation, the MinuteOfHour entity performed poorly in both Span Only and 100% Correct Entity evaluations.", "This was a result of Chrono looking for an HourOfDay in two different methodsone that identified formatted times and another that first looked for an AMPM entity and, if found, searched for an upstream HourOfDay.", "The majority of time expressions in THYME were formatted as hh:mm followed by an AM or PM which resulted in HourOfDay being identified by AMPM parsing and not the formatted time method.", "The AMPM method was designed to identify the pattern found frequently in Newswire texts (e.g. 5 PM), which doesn't include second or minute parsing.", "To fix this issue the formatted time method was adjusted to allow for the hh:mm format, so now the HourOfDay and MinuteOfHour entities are being identified and appropriate sub-intervals are annotated.", "However, this code improvement resulted in another decrease in performance of the HourOfDay entity.", "Too Many Hours of the Day!", "The expected result of fixing the AMPM entity and formatted time parsing was increased performance on AMPM, MinuteOfHour, and HourOfDay entity parsing because the AMPM and MinuteOfHour sub-interval links were now identified correctly.", "However, HourOfDay performance actually became worse due to predicting too many HourOfDay entities.", "Further investigation revealed that every temporal phrase that included an AMPM entity had duplicate HourOfDay entities annotated (the same hour was annotated twice), one with the correct AMPM and MinuteOfHour sub-interval links and the other with no sub-interval links.", "This issue stemmed from a combination of the hierarchical parsing of formatted dates/times and inadvertently excluding a check to see if an HourOfDay entity already existed when parsing AMPM entities.", "In Chrono, all temporal phrases are interrogated by all modules.", "To ensure only one entity of each type is identified in each temporal phrase Chrono implements a flag system.", "For example, in the phrase Monday at 3:05 PM. 
there is one DayOfWeek, one HourOfDay, one MinuteOfHour, and one AMPM entity.", "This phrase is first parsed by the formatted date/time module to identify the HourOfDay 3 and the MinuteOfHour 05 entity.", "Following is the identification of the PM AMPM entity; however, if this module finds an AMPM entity it then proceeds to look for an HourOfDay entity preceeding the AMPM substring.", "However, an HourOfDay had already been identified, and the AMPM module neglected checking this.", "Fixing this double parsing issue was straightforward as the AMPM module just needed to check if the HourOfDay flag had been set for the given temporal phrase.", "This error resulted in some initially puzzling results where the HourOfDay performance kept decreasing with every im-provement, and ended up identifying twice as many HourOfDay entities as it should have.", "Different modules may be required for parsing different date/time formats, so it is important to ensure that all modules are consistently coded.", "It is also important to keep in mind that some formats are more frequent in one domain than another.", "This issue had not appeared when using the Newswire corpus because the majority of the AMPM entities were accompanied by the shorter format of 5 PM, or contained the full hh:mm:ss format, whereas in the clinical domain the specification of hour and minutes, such as 3:05 PM, was ubiquitous throughout the corpus.", "Stop words splitting temporal phrases: Chrono was initially unable to handle stop words that connected temporal entities into a single phrase, which limited its performance on the THYME corpus due to the use of long temporal expressions in clinical texts.", "Chrono identified temporal phrases by looking for consecutive temporal and/or numeric tokens.", "If a stop word was identified (e.g. 
is, of, at, etc), the temporal phrase would be terminatedin some cases prematurely.", "For example, the phrase beginning of this month on September 1 was originally separated into 3 temporal phrases: beginning, this month, and September 1.", "Other examples of temporal phrases that were incorrectly split include 2005 in April and October 14, 2010 at 02:07 PM, which were both separated into two phrases.", "While individual temporal entities were identified correctly, the correct sub-intervals for each entity were unable to be assigned because Chrono only links sub-intervals within a single phrase.", "To fix this, code was added to tag link-ing words in the temporal phrase extraction module.", "Now, if a linking token is identified while constructing a temporal phrase it is ignored and the phrase is extended.", "This allows Chrono to correctly identify longer temporal phrases and results in correct assignment of sub-intervals, which brought the 100% Entity performance closer to Span Only.", "Unexpected Effects of Longer Temporal Phrases: The inclusion of stop words in temporal phrases was a major upgrade to Chrono resulting in sub-intervals of longer phrases being correctly assigned.", "However, this had an unintended result that initially lowered the overall F1 scores for Calendar-Interval and Period entities.", "Investigating changes in performance revealed CalendarInterval and Period entities that were correct were now incorrectly annotated with a link to a Number entity.", "This happened for phrases like four times a day or one time a day, which are highly frequent expressions in clinical notes as they are part of instructions for taking medications.", "This behavior resulted from Chrono's parsing strategy for identifying associated numbers with SCATE entities where Chrono naively looked for a number token in the sub-string of characters preceding an annotated entity.", "This parsing strategy worked well for Newswire text as the majority of associated numbers appeared in formats similar to 2 weeks ago, or 5 days.", "Previously, Chrono assigned expressions like four times a day to two temporal phrases: four times and day.", "Thus, the Calendar-Interval day was correctly identified with no Number link.", "After including the stop words in the temporal phrases the first number in the phrase (e.g. four) was incorrectly associated with the Period or Calendar-Interval entity.", "Chrono's number parsing strategy also became an issue with other frequent clinical phrases such as one-time daily where the number one was incorrectly associated with the Calendar-Interval daily.", "To fix this issue, Chrono's definition of where a number had to be located in order to be linked to a SCATE entity was restricted to the immediately preceding token instead of the full preceding sub-string.", "This restriction works well for the THYME and Newswire corpora; however, may not work well with expressions such as 2 full weeks from now where the Period weeks should be annotated with the Number 2.", "Sentence Boundaries: An interesting temporal parsing issue appears in clinical texts regarding sentence tokenization due to item lists in the clinical record.", "Initially, Chrono did not tokenize on sentences as temporal phrases spanning sentence boundaries were not an issue in the Newswire corpus.", "However, clinical records in the THYME corpus contained entries like the following: ...my notes from December. 2. 
Ulcerative colitis...", "Where the top sentence ends with the temporal entity December followed by a numbered list item.", "Since Chrono did not consider sentence boundaries, this line break was removed in the preprocessing phase and the 2 that numbers the list item was parsed as a DayOfMonth associated with December.", "To resolve this issue, Chrono was updated to identify sentence boundaries.", "In Temporal Phrase Extraction, Chrono no longer allows a single temporal phrase to span sentence boundaries; however, the Temporal Disambiguation module still ignores these boundaries.", "Metadata: Domain agnostic rules and procedures can be developed to identify many temporal expressions in written text, but metadata presents additional challenges in that it is inherently domain-specific, and can even be document type specific within the same domain.", "For example, pathology reports and clinical encounters with a physician can have their metadata formatted in different ways.", "In dealing with metadata the first question is if one wants to parse the metadata at all.", "A good reason to do so would be to gather contextual information that is not explicitly written in the text, like identifying the document cre-ation date to disambiguate references to days of the week, etc.", "The gold standard SCATE annotations do contain dates from the metadata sections, so it is necessary for Chrono to identify these entities.", "Two issues arose when working on this problem:", "1) How to identify a temporal token using whitespace tokenization when the metadata line contains little whitespace, and", "2) whether or not to include the word date as a temporal token.", "In the THYME corpus metadata is formatted as [start date=12/02/2010, rev date=12/02/2010].", "Using whitespace tokenization this line is split into two tokensboth marked as temporal as they contain formatted date strings.", "However, in the Temporal Phrase Extraction module this line is considered a single phrase because it is composed of two consecutive temporal tokens.", "This causes an issue as Chrono assumes there is only one of each SCATE entity type in a phrase; thus, initially Chrono only annotated one of the two dates in the metadata line.", "To resolve this, Chrono now converts all equal signs to spaces prior to whitespace tokenization, thereby separating the metadata text to four tokens.", "While this fix resolved the issue of parsing metadata dates, an equal sign could be useful information, so a more sophisticated approach will be required in the future.", "The second issue with parsing metadata information arose when updating the lexicon of known temporal tokens.", "The word date is temporal, but had not been included in the initial lexicon of Chrono.", "Including date as a temporal token resulted in identifying the metadata line as a single temporal phrase again as it was now a consecutive sequence of four temporal tokens: start date, 12/02/2010, rev date, and 12/02/2010.", "As start date and rev date are just labels they should not be considered temporal entities.", "Some mentions of date were valid temporal expressions, but there were few of them.", "Thus, we decided to continue to exclude this token.", "To be applicable to different domains, more sophisticated methods to parse metadata will need to be implemented to resolve issues with temporal labels and other special characters seen in metadata text.", "Improvements made to Chrono using the THYME Training Corpus lead to a 0.27 and 0.24 increase in precision and recall, 
respectively, with a 0.26 increase in F1 measure for the Evaluation Corpus (Table 3).", "This resulted in Chrono being the top performing system for SCATE Normalization.", "Chrono's performance on the Training Corpus improved similarly with a precision of 0.881 in the Span Only evaluation and 0.729 for the 100% Correct Entity.", "This indicates that Chrono is identifying the correct location of many entities, but it is having trouble setting all the properties correctly.", "When designing a rule-base system it is possible to develop rules that overfit or are tailored to the training corpus (i.e. Newswire texts).", "Overfitting rules results in good performance on the Dataset System Precision Recall F1 THYME Eval Chrono 0.76 0.51 0.61 THYME Eval Laparra et.", "training domain and poor performance on the testing domain, similar to Chrono's performance on the THYME corpus.", "However, when rules are adjusted to incorporate another domain it is expected that the performance in the training domain go down, indicating that it was overfitting the training domain.", "To see if this happened with Chrono, we re-evaluated our final model on the Newswire corpus.", "The results showed an insignificant 0.01 drop in F1 due to a 0.05 drop in Precision and a 0.04 increase in Recall, which indicates that Chrono is now more compatible with cross-domain application.", "Since we do not see a major drop in performance on the Newswire corpus we can conclude the original rules did not overfit the Newswire domain, but rather they were incomplete and required expansion to improve performance in the clinical domain.", "In conclusion, clinical domain texts posed additional challenges that were either not present in the Newswire corpus, or not frequent enough to prioritize highly when initially building Chrono.", "Application to the THMYE Training Corpus brought these limitations to light, such as the consistent use of temporal expressions that utilize frequency, highly repeated temporal phrases, dosage values being annotated as temporal expressions, and additional lexical elements.", "As temporal information is relatively domain agnostic, improvements made to Chrono for THYME should improve performance on other domains.", "An advantage of utilizing clinical texts is that it encounters a variety of writing styles from different practitioners who may prefer specifying temporal information in different ways.", "Additionally, different medical forms, such as pathology reports versus clinical notes, have specific ways to convey dates.", "Thus, the range of temporal expressions Chrono now identifies has been significantly expanded due to the variety incorporated in the clinical texts.", "While Chrono's performance on SCATE Normalization has improved, there are still many areas for further development.", "These include identifying frequency, disambiguating dosage versus 4-digit year or 24-hour time, implementing more sophisticated approaches to parsing metadata, and performing a more detailed investigation at the entity level to identify which SCATE properties are being missed or incorrectly assigned in order to bring the 100% Correct Entity performance closer to the Span Only performance.", "These updates will require the implementation of additional rule-sets as well as the addition of machine learning modules and more complex contextual parsing.", "One approach to augmenting current rule sets is the automated generation of regular expressions (Redd et al., 2015) based on annotated gold standards, which has the potential 
to expand Chrono's capabilities without time-consuming human review of missed expressions.", "Finally, Chrono outputs normalized temporal expressions in the SCATE schema format, which limits our ability to evaluate its performance on corpora in other domains.", "Currently, only select subsets of the AQUAINT and THYME corpora are annotated with SCATE, and the complete conversion of TimeML to the SCATE schema is difficult as TimeML lacks details required by SCATE.", "Thus, implementation of a method to convert SCATE XML to the standard TimeML format will allow Chrono to be evaluated on additional cross-domain corpora and classic benchmark temporal corpora such as i2b2 (Sun et al., 2013a), TempEval (Verhagen et al., 2007, 2010; UzZaman et al., 2013), and Clinical TempEval (Bethard et al., 2015).", "The process of improving Chrono brought to light several aspects of cross-domain application of temporal parsing:", "1) lexical differences,", "2) the frequency of temporal entity usage,", "3) disambiguating numerical phrases,", "4) appropriate machine learning data,", "5) lexical variation of concepts, and", "6) differences in document structure.", "While the concept of time is the same regardless of the domain, its representation can vary.", "Thus, temporal parsing provides a good backdrop for determining the challenges of cross-domain application, which is difficult for many NLP applications.", "The aspects of cross-domain application discussed herein provide a foundation for designing adaptable NLP tools that can be utilized across domains." ]
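Several of the parsing fixes described in the block above (alphanumeric dates such as 21-SEP-2009, hh:mm times without seconds, and AM/PM tokens with or without periods) come down to regular-expression changes. The snippet below is a simplified illustration of such rules, not Chrono's actual implementation (which is available in the linked repository); the patterns, the month table, and the example sentence are assumptions chosen to match the cases quoted in the text.

```python
import re

MONTHS = {"JAN": 1, "FEB": 2, "MAR": 3, "APR": 4, "MAY": 5, "JUN": 6,
          "JUL": 7, "AUG": 8, "SEP": 9, "OCT": 10, "NOV": 11, "DEC": 12}

# dd-MMM-yyyy dates such as "21-SEP-2009" (alphanumeric month allowed)
DATE_RE = re.compile(r"\b(\d{1,2})-([A-Za-z]{3})-(\d{4})\b")

# hh:mm with an optional :ss, so "5:45 PM" is captured as well as "02:07:30"
TIME_RE = re.compile(r"\b(\d{1,2}):(\d{2})(?::(\d{2}))?\b")

# AM/PM with or without periods; "a.m." keeps its final period, "PM." does not
# (a real rule set would also use context to avoid matching the verb "am")
AMPM_RE = re.compile(r"\b([AaPp]\.[Mm]\.|[AaPp][Mm])(?=\W|$)")

def month_to_int(token: str) -> int:
    """Convert a textual month such as 'SEP' or 'sep' to its number."""
    return MONTHS[token.upper()[:3]]

def parse(text: str):
    dates = [(int(d), month_to_int(m), int(y))
             for d, m, y in DATE_RE.findall(text)]
    times = [(int(h), int(mi), int(s) if s else None)
             for h, mi, s in TIME_RE.findall(text)]
    ampm = AMPM_RE.findall(text)
    return dates, times, ampm

print(parse("Seen 21-SEP-2009 at 5:45 PM., next visit 02:07:30 a.m."))
```

Note how the AM/PM rule keeps the final period of "a.m." but leaves the sentence-ending period of "PM." outside the span, mirroring the span decision discussed for the AMPM entity.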
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Authorship attribution aims to identify the author of a text based on the stylometric analysis.", "Authorship obfuscation , on the other hand, aims to protect against authorship attribution by modifying a text's style.", "In this paper, we evaluate the stealthiness of state-of-the-art authorship obfuscation methods under an adversarial threat model.", "An obfuscator is stealthy to the extent an adversary finds it challenging to detect whether or not a text modified by the obfuscator is obfuscated a decision that is key to the adversary interested in authorship attribution.", "We show that the existing authorship obfuscation methods are not stealthy as their obfuscated texts can be identified with an average F1 score of 0.87.", "The reason for the lack of stealthiness is that these obfuscators degrade text smoothness, as ascertained by neural language models, in a detectable manner.", "Our results highlight the need to develop stealthy authorship obfuscation methods that can better protect the identity of an author seeking anonymity.", "Authorship attribution aims to identify the author of a text using stylometric techniques designed to capitalize on differences in the writing style of different authors.", "Owing to recent advances in machine learning, authorship attribution methods can now identify authors with impressive accuracy (Abbasi and Chen, 2008) even in challenging settings such as cross-domain (Overdorf and Greenstadt, 2016) and at a large-scale (Narayanan et al., 2012; Ruder et al., 2016).", "Such powerful authorship attribution methods pose a threat to privacy-conscious users such as journalists and activists who may wish to publish anonymously (Times, 2018; Anonymous, 2018).", "Authorship obfuscation, a protective countermeasure, aims to evade authorship attribution by obfuscating the writing style in a text.", "Since it is challenging to accomplish this manually, researchers have developed automated authorship obfuscation methods that can evade attribution while preserving semantics (PAN, 2018).", "However, a key limitation of prior work is that authorship obfuscation methods do not consider the adversarial threat model where the adversary is ob-fuscation aware (Karadzhov et al., 2017; Potthast et al., 2018; Mahmood et al., 2019).", "Thus, in addition to evading attribution and preserving semantics, it is important that authorship obfuscation methods are stealthy i.e., they need to hide the fact that text was obfuscated from the adversary.", "In this paper, we investigate the stealthiness of state-of-the-art authorship obfuscation methods.", "Our intuition is that the application of authorship obfuscation results in subtle differences in text smoothness (as compared to human writing) that can be exploited for obfuscation detection.", "To capitalize on this intuition, we use off-the-shelf pre-trained neural language models such as BERT and GPT-2 to extract text smoothness features in terms of word likelihood.", "We then use these as features to train supervised machine learning classifiers.", "The results show that we can accurately detect whether or not a text is obfuscated.", "Our findings highlight that existing authorship obfuscation methods themselves leave behind stylistic signatures that can be detected using neural language models.", "Our results motivate future research on developing stealthy authorship obfuscation methods for the adversarial threat model where the adversary is obfuscation aware.", "We study the problem of obfuscation detection for state-of-the-art 
authorship obfuscation methods.", "This and the underlying property of stealthiness has been given scant attention in the literature.", "We also note that this problem is potentially more challenging than the related one of synthetic text detection since most of the original text can be retained during obfuscation.", "We explore 160 distinct BERT and GPT-2 based neural language model architectures designed to leverage text smoothness for obfuscation detection.", "We conduct a comprehensive evaluation of these architectures on 2 different datasets.", "Our best architecture achieves F1 of 0.87, on average, demonstrating the serious lack of stealthiness of existing authorship obfuscation methods.", "Paper Organization: The rest of this paper proceeds as follows.", "Section 2 summarizes related work on authorship obfuscation and obfuscation detection.", "Section 3 presents our proposed approach for obfuscation detection using neural language models.", "Section 4 presents details of our experimental setup including the description of various authorship obfuscation and obfuscation detection methods.", "We present the experimental results in Section 5 before concluding.", "The relevant source code and data are available at https://github.com/asad1996172/ Obfuscation-Detection .", "In this section, we separately discuss prior work on authorship obfuscation and obfuscation detection.", "Given the privacy threat posed by powerful authorship attribution methods, researchers have started to explore text obfuscation as a countermeasure.", "Early work by Brennan et al. (2012) instructed users to manually obfuscate text such as by imitating the writing style of someone else.", "Anony-mouth (McDonald et al., 2012, 2013) was proposed to automatically identify the words and phrases that were most revealing of an author's identity so that these could be manually obfuscated by users.", "Follow up research leveraged automated machine translation to suggest alternative sentences that can be further tweaked by users (Almishari et al., 2014; Keswani et al., 2016).", "Unfortunately, these methods are not effective or scalable because it is challenging to manually obfuscate text even with some guidance.", "Moving towards full automation, the digital text forensics community (Potthast and Hagen, 2018) has developed rule-based authorship obfuscators (Mansoorizadeh et al., 2016; Karadzhov et al., 2017; Castro-Castro et al., 2017).", "For example, Karadzhov et al. (2017) presented a rule-based obfuscation approach to adapt the style of a text towards the average style of the text corpus.", "Castro et al. (2017) presented another rule-based obfuscation approach to simplify the style of a text.", "Researchers have also proposed search and model based approaches for authorship obfuscation.", "For example, Mahmood et al. (2019) proposed a genetic algorithm approach to search for words that when changed, using a sentiment-preserving word embedding, would have the maximum adverse effect on authorship attribution.", "Bevendorff et al. (2019) proposed a heuristic-based search algorithm to find words that when changed using operators such as synonyms or hy-pernyms, increased the stylistic distance to the author's text corpus.", "Shetty et al. (2018) used Generative Adversarial Networks (GANs) to transfer the style of an input text to a target style.", "Emmery et al. 
(2018) used auto-encoders with a gradient reversal layer to de-style an input text (aka style invariance).", "Prior work has successfully used stylometric analysis to detect manual authorship obfuscation (Juola, 2012; Afroz et al., 2012).", "The intuition is that humans tend to follow a particular style as they try to obfuscate a text.", "In a related area, Shahid et al. (2017) used stylometric analysis to detect whether or not a document was spun by text spinners.", "We show later that these stylometric-methods do not accurately detect more advanced automated authorship obfuscation methods.", "There is increasing interest in distinguishing synthetic text generated using deep learning based language models such as BERT and GPT-2 from human written text.", "Using contextual word likelihoods, as estimated using a pre-trained language model (Radford et al., 2019), Gehrmann et al. (2019) were able to raise the accuracy of humans at detecting synthetic text from 54% to 72%.", "Zellers et al. (2019) showed that a classifier based on a language model can accurately detect synthetic text generated by the same language model.", "However, the detection accuracy degrades when different language models are used to generate and to detect.", "Bakhtin et al. (2019) also showed that the detection accuracy degrades when the synthetic text is generated using a language model trained on a different corpus.", "In summary, recent research has leveraged language models to detect their generated synthetic text.", "However, in obfuscation we start with human written text and make modifications such that text semantics is still preserved.", "This is in part achieved by retaining chunks of the original writing.", "Thus, the quirks of the obfuscator will be mingled in unpredictable proportions and ways with the author's original writing style.", "This makes the detection of obfuscated text different and potentially more challenging than synthetic text detection.", "To the best of our knowledge, this work presents the first systematic study of the detection of automatically obfuscated text.", "An automated authorship obfuscator changes the input text so that it evades authorship attribution while preserving semantics.", "The quality and smoothness of automated text transformations using the state-of-the-art obfuscators differ from that of human written text (Mahmood et al., 2019).", "Therefore, the intuition behind our obfuscation detectors is to exploit the differences in text smoothness between human written and obfuscated texts.", "We capture text smoothness using powerful pretrained context aware neural language models.", "1 A text with a relatively greater proportion of high likelihood words is likely to be more smooth.", "Figure 1 shows the pipeline of our method for detecting whether or not a given text is obfuscated.", "First, a language model is used to extract the likelihood (in the form of probability or rank) for each word in the text.", "Second, these likelihoods are used to build a smoothness representation for the text.", "This is input to a supervised machine learning model that is trained to classify the text as human written or obfuscated.", "The three steps correspond to three significant architectural dimensions 1 BERT: https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html;GPT-2:https://openai.com/blog/better-language-models of our detectors with multiple algorithmic options in each dimension.", "Combinations of choices along each dimension yield different architectures that 
can be used by an adversary to detect obfuscated documents.", "We detail each dimension next.", "Given a word sequence, language models are designed to predict the next word.", "They do this by building contextual models of word occurrences as probability distributions over the full vocabulary.", "Then some heuristic is used to pick the next word e.g., select the word with the highest probability.", "In our case, instead of word prediction, we extract the likelihood from the language model (either as a probability or as a rank) for each word in the text given its context.", "The language model has a critical role.", "Thus, we use neural language models with deep architectures and trained on large amounts of data which are better at identifying both long-term and short-term context.", "In order to imitate an adversary who may not have the significant resources needed to train such models, we use off-the-shelf pre-trained neural language models.", "Specifically, we choose well-known context-aware neural language models GPT-2 (Radford et al., 2019) and BERT (Devlin et al., 2018).", "We choose both as they use different approaches.", "GPT-2 has been shown to perform better than BERT (Gehrmann et al., 2019) at synthetic text detection, with word rank giving higher performance than word probability.", "Their relative merit for obfuscation detection is unknown.", "1) GPT-2.", "GPT-2 released by Open AI in 2019 uses at its core, a variation of the transformer architecture, an attention based model (Vaswani et al., 2017) and is trained on text from 45 million outbound links on Reddit (40 GB worth of text).", "We use GPT-2 to compute the conditional probability for word i as p ( w i | w 1 ...i 1 ) .", "The position of w i in the sorted list (descending order of probability) of vocabulary words gives the word rank.", "The authors (Radford et al., 2019) trained four versions of GPT-2 differing in architecture size.", "Of these, we used the small and medium versions containing 117M and 345M parameters, respectively.", "The authors eventually also released a large version containing 762M parameters and a very large version containing 1542M parameters.", "2 We did not use 2 https://openai.com/blog/gpt-2-6-month-follow-up/ Language Model 1) GPT-2 117M 2) GPT-2 345M 3) BERT base 4) BERT large w 1 w 2 w 3 w n Input Text Word Likelihood Extraction Feature Representation Probabilities Ranks 0.2 0.8 0.3 0.7 1 50 20 100 Binning VGG19 1) SVM 2) RFC 3) KNN 4) ANN 5) GNB Classification Model Figure 1: Pipeline for obfuscation detection them because only the small and medium versions were released at the time of our experimentation.", "2) BERT.", "BERT released by Google in 2018 is also based on Transformers.", "It is trained on text from Wikipedia (2.5B words) and Book-Corpus (800M words).", "BERT considers a bidirectional context unlike the uni-directional context considered by GPT-2.", "Thus, in BERT the conditional occurrence probability for word i is p ( w i | w i k...i 1 , w i +1 ...i + k ) where k is the window size on each direction.", "Rank is computed in the similar way as GPT-2.", "We use both pre-trained BERT: BERTBASE with 110M parameters and BERTLARGE with 340M parameters.", "We implement likelihood extraction for both GPT-2 and BERT, using code made available by the Giant Language Model Test Room (GLTR) tool.", "3 3.2.2 Feature Representation We experiment with two different representations of smoothness.", "Each is explored with occurrence probabilities and with ranks.", "1) Binning based features: 
Text smoothness is represented by the likelihood of words in text.", "A text with a greater proportion of high likelihood words is likely to be smoother.", "We aggregate this information using fixed size bins representing different likelihood ranges.", "For probabilities we create bin sizes of 0.001, 0.005 and 0.010.", "For ranks we create bin sizes of 10, 50 and 100.", "Thus for example, one feature representation is to consider bins of ranks from 0 to 10, 11 to 20, 21 to 30 etc.", "Each bin contains the proportion of words in the document with likelihood in that range.", "2) Image based features: Since the word likelihood values received from language models are in essence signals, we explore signal detection approaches as well.", "For example, for audio classifi-3 https://github.com/HendrikStrobelt/detecting-fake-text cation (Hershey et al., 2017) store plots of the log-mel spectogram of the audios as images and then apply image classification methods.", "VGG (Si-monyan and Zisserman, 2014), was one of the top performers of the different classifiers they tested.", "Inspired by them, we explore obfuscation detection via image classification.", "Specifically, we explore a transfer learning approach wherein we use the VGG-19 classifier 4 trained for image classification on ImageNet dataset 5 .", "For our method, we sort the extracted likelihood values for the text in descending order and then plot these values saving it as an image.", "This image is then processed by the pre-trained VGG-19.", "We extract the document's 6 representation from the last flatten layer of VGG-19 (before the fully connected layers) as it contains high-level information regarding edges and patterns in the image.", "We expect this resulting feature representation vector to capture information regarding text smoothness.", "We experiment with Support Vector Machine (SVM) with a linear kernel, Random Forest Classifier (RFC) an ensemble learning method, K Nearest Neighbor (KNN) which is a nonparametric method, Artificial Neural Network (ANN) which is a parametric method, and Gaussian Naive Bayes (GNB) which is a probabilistic method.", "All classifiers are trained using default parameters from scikit-learn 7 except for ANN, where we use lbfgs solver instead of adam because it is more performant and works well on smaller datasets.", "Options selected for each dimension combine to form a distinct obfuscation detection architecture.", "With 4 language models giving probabilities or ranks as output, 4 features (3 binning based features and 1 image based feature) and 5 different classifiers we experiment with a total of 160 distinct architectures.", "The assumption here is that a determined adversary will similarly look for the most effective obfuscation detector.", "As state-of-the-art automated authorship obfuscators we identified the top two systems (Potthast et al., 2018) from PAN, a shared CLEF task.", "8 We also chose Mutant-X, a search based system presented in (Mahmood et al., 2019), which shows better performance than the PAN obfuscation systems.", "These are detailed next.", "Document Simplification (Castro-Castro et al., 2017) .", "This approach obfuscates by applying rule-based text simplifications on the input document.", "The process is as follows.", "1) If the number of contractions in the document is greater than the number of expansions, then replace all contractions with expansions otherwise replace all expansions with contractions.", "2) Simplify by removing parenthetical texts that do not contain any 
named entity, discourse markers or appositions.", "3) Replace words with synonyms that haven't been already used in the text.", "We implement this approach and refer to it as DS-PAN17.", "Style Neutralization (Karadzhov et al., 2017) .", "This system is also a rule-based text obfuscator.", "First they calculate the average values for the whole corpus for stylometric features such as stopword to non stopword ratio, punctuation to word count ratio and average number of words per sentence.", "Next, they calculate the values of same sty-lomteric features for the input document.", "Finally, using text transformation rules (e.g., replace ! with !!, merge or split sentences etc.) they move the document's stylometric feature values towards the corpus averages.", "We evaluate this approach using the code provided by the authors and refer to it as SN-PAN16.", "MUTANT-X (Mahmood et al., 2019) This system uses a genetic algorithm (GAs) in combination with an authorship attribution system to identify words that when changed would have the highest positive effect towards obfuscation.", "Text 8 https://pan.webis.de/ transformations are done using a sentiment preserving variation of Word2Vec (Yu et al., 2017).", "The authors present two versions: MUTANT-X writeprintsRFC, built using a traditional machine learning based authorship attribution system and MUTANT-X embeddingCNN, built using a deep learning based authorship attribution system.", "We evaluate MUTANT-X embeddingCNN using code made available by authors.", "We use the two data collections which were used by (Mahmood et al., 2019).", "1) Extended Brennan Greenstadt corpus .", "This text corpus from (Brennan et al., 2012) contains 699 documents written by 45 unique authors.", "Documents are mainly academic in nature but they do not contain any citations and section headings and have under 500 words, as instructed by data collectors.", "We refer to this collection as EBG. 
2) Blog authorship corpus .", "This text corpus which is from (Schler et al., 2006) contains more than 600,000 blogger.com blog posts written by 19,320 unique authors.", "These texts are more casual in nature and are written without any instructions.", "We refer to this collection as BLOG.", "The authors in (Mahmood et al., 2019) tested their GA obfuscators using 49 EBG and 200 BLOG documents written by 10 authors.", "A document is considered obfuscated if it has been processed by an authorship obfuscation tool.", "Thus we processed each document in the 49 EBG and 200 BLOG sets using each of the three obfuscation systems in turn (DS-PAN17, SN-PAN16 and MUTANT-X embeddingCNN).", "This resulted in 127 obfuscated EBG documents and 571 obfuscated BLOG documents.", "9 An obfuscated document is viewed as an evaded document if it successfully evades authorship attribution.", "Using the writeprintsRFC authorship attributor (Mahmood et al., 2019) for this purpose, we process all obfuscated documents (127 and 571 for EBG and BLOG respectively).", "This resulted 9 DS-PAN17, SN-PAN16 and MUTANT-X embeddingCNN respectively created 49, 49 and 29 obfuscated documents for EBG whereas for BLOG they created 200, 200 and 171 obfuscated documents.", "Lower number of documents generated by MUTANT-X is due to the fact that it only obfuscates documents that are correctly classified by the attribution classifier it uses.", "Despite the type and extent of transformations made, the remaining obfuscated documents did not evade the attributor.", "The evaded documents are those where the modification strategy somehow crossed an implicit threshold for evading authorship attribution.", "Thus, we surmise that the evaded documents are likely to be relatively less smooth.", "It is noteworthy that evaded documents are the realistic goal from a privacy seeking user's perspective.", "Thus, in addition to assessing detection for obfuscated data, we also assess detection for evaded data for both (EBG and BLOG) obfuscated datasets.", "For each dataset, we use the original and obfus-cated/evaded documents by 5 (out of 10) authors for training and the documents by the remaining 5 authors for testing.", "An important decision to make here is with regards to the distribution of original documents versus obfuscated (or evaded) ones in our training and testing set.", "We have no real world scenario to mimic in that we have not encountered any real world use of automated obfuscators and their outputs.", "Therefore we make the datasets under a reasonable assumption that original documents are in the vast majority.", "Thus we create train and test datasets with roughly 85% -90% original documents.", "We do this by adding more original documents from each data collection to achieve these proportions.", "Overall, we ensure that each author's documents (of any version) are all either in the train or in the test set.", "Table 1 provides the size details for our obfuscated and evaded datasets.", "We propose a total of 160 distinct architectures (see 3.2.4).", "Testing these for each of the four datasets, we conduct a total of 640 distinct obfuscation detection experiments.", "In addition, we explore three other methods; the first is inspired by recent research to detect synthetic text.", "The other two were used previously to detect manually obfuscated documents.", "1) GLTR (Gehrmann et al., 2019) .", "The authors present a tool to help humans distinguish between original and machine generated synthetic text.", "Their tool uses pretrained 
language models to extract word likelihoods and presents their plot to humans making the decision.", "Their best model uses GPT-2 117M language model to extract word ranks which are then put in 4 unequal range bins: 0-10, 10-100, 100-1000 and greater than 1000.", "We test the same method on our obfuscation detection problem.", "For obfuscation detection, we tested this feature set with linear SVM and RFC.", "Among these, linear SVM performed the best.", "2) Character trigrams + KNN (Juola, 2012) .", "This paper presents a stylometric approach to detect manually obfuscated documents.", "Firstly, they preprocess each document by unifying case and separating punctuation (e.g., !! becomes ! !).", "Then they represent each document using character trigrams.", "Finally the classification is performed using KNN with normalized cosine distance.", "We implement this approach to evaluate it against automated authorship obfuscaters.", "3) Writeprints + SVM (Afroz et al., 2012) .", "This paper also uses stylometric features and is also focused on detecting manual obfuscation.", "It is also similar to the approach in (Shahid et al., 2017), which uses stylometric features to perform spun document detection.", "Afroz et al. tested with 3 different feature sets of which writeprints was the best.", "Their writeprints feature set contains character related features (e.g., total characters, percentage of uppercase letters etc.), word related features (e.g., total words, frequency of large words etc.) and other features including frequencies of digits, special characters, function words etc.", "They use this feature set with SVM (using poly kernel) to perform obfuscation detection.", "On our datasets, we found linear SVM to be working better than the polynomial one, so we report results with linear SVM.", "We implement this writeprints approach with linear SVM as our final detector.", "Summary trends: After averaging we find that for obfuscation detection, 25% of all 160 architectures achieve F1 score greater than 0.76, 50% achieve F1 score greater than 0.72 and a high 75% of them were able to achieve F1 score greater than", "0.52.", "Figure 2 summarizes the performances of all 160 different architectures across the four datasets.", "We see that obfuscation detection is easier in EBG than in BLOG with median EBG F1 scores being significantly better than median BLOG F1 scores (notches do not overlap (Krzywinski and Altman, 2014)).", "This can be explained by the fact that EBG contains scholarly articles that are relatively more consistent in their smoothness than blogs.", "This likely makes it easier to pick up on the difference in smoothness caused by obfuscated documents in EBG than in BLOG.", "We can also see that evaded documents achieve a higher maximum F1 score than obfuscated documents.", "This confirms our intuition presented in 4.2.1, that evaded documents are likely to be less smooth and therefore easier to detect than obfuscated documents.", "However, we also see that F1 scores for evaded datasets are less stable (greater box size) than obfuscated datasets.", "We believe that this is due to the fact that there are fewer documents in evaded datasets as compared to their respective obfuscated datasets (see Table 1).", "Performance evaluation: In terms of architecture selection, instead of choosing randomly across 160 architectures, we make the following assump-EBG obfuscated EBG evaded BLOG obfuscated BLOG evaded 0.0 0.2 0.4 0.6 0.8 1.0 F 1 s c o r e Figure 2: Notched box plots for obfuscation 
detection F1 scores using all 160 architectures for each dataset.", "tion.", "We assume that the adversary is knowledgeable about the various choices, tests these alternatives and employs the best configuration.", "Thus, we present results for the best models, based on F1 scores for obfuscation detection, achievable by the adversary (Table 2).", "Table 2 also presents results for the three additional methods presented in section 4.3.2.", "Our best BERT and GPT2 combinations outperform all other methods across each of the four datasets in F1 score and recall.", "Along with (GLTR + SVM) these achieve the best precision for the EBG datasets.", "In BLOG obfuscated, GLTR based method achieves the highest precision whereas in BLOG evaded both the GLTR based method and character trigrams method top the chart however in each case with a sizeable penalty paid in recall and therefore in F1 score.", "In summary, we see that using the best of methods the adversary can detect evaded and obfuscated documents with F1 score of 0.77 or higher (average 0.87 across datasets) which indicates that the tested state-of-the-art obfuscators are far from stealthy.", "Now we analyze the effect of different choices made within each of the three dimensions depicted in Figure 1.", "As mentioned earlier, for a privacy seeking user evading author attribution is more im-SVM RFC KNN ANN GNB 0.0 0.2 0.4 0.6 0.8 1.0 EBG evaded SVM RFC KNN ANN GNB 0.0 0.2 0.4 0.6 0.8 1.0 BLOG evaded GPT2 1 1 7 MGPT2 3 4 5 MBERT b a s e BERT l a r g e 0.0 0.2 0.4 0.6 0.8 1.0 EBG evaded GPT2 1 1 7 MGPT2 3 4 5 MBERT b a s e BERT l a r g e 0.0 0.2 0.4 0.6 0.8 1.0 BLOG evaded Binning Image 0.0 0.2 0.4 0.6 0.8 1.0 EBG evaded Binning Image 0.0 0.2 0.4 0.6 0.8 1.0 BLOG evaded probabilities ranks 0.0 0.2 0.4 0.6 0.8 1.0 EBG evaded probabilities ranks 0.0 0.2 0.4 0.6 0.8 1.0 BLOG evaded", "portant than just obfuscation.", "So, in this section we present architecture analysis results only for evaded datasets involving 320 experiments (160 each for EBG evaded and BLOG evaded).", "Figure 3", "(a) presents notched box plots comparing distribution of F1 scores achieved by language models across both datasets.", "In EBG evaded, BERT language models achieve higher maximum F1 score (0.95) than GPT-2 (0.90 0.91).", "On the other hand, in BLOG evaded, GPT-2 345M achieves higher maximum F1 score (0.83) than others (0.75 0.80).", "Relatively, BERT shows greater consistency in F1 score (box size) than GPT-2 in both datasets.", "We believe that the bidirectional nature of BERT helps in capturing context and consequently smoothness better than GPT-2 which is uni-directional.", "While the difference in maximum F1 score between ranks and probabilities is slight for each dataset (Figure 3", "(b)) box sizes show the spread in F1 scores is smaller with probabilities than with ranks.", "Upon further investigation, we find that experiments which use probabilities with image based features have an inter-quartile range of 0.05 and 0.1 for EBG and BLOG respectively whereas for experiments using probabilities with binning based features, this range is 0.32 for both datasets.", "On the other hand, inter-quartile range for experiments using ranks with image based features is 0.08 and 0.05 for EBG and BLOG whereas for experiments using ranks with binning based features, this range is 0.49 and 0.42 respectively.", "This shows that for both datasets, greater variation in F1 scores for ranks as compared to probabilities is caused by binning based features.", "We believe that binning 
ranks with fixed bin sizes (10, 50, 100) is less stable for both BERT and GPT-2 which have different limits of ranks this could account for the larger inter-quartile range using ranks.", "The box sizes in Figure 3", "(c) show that image based features exhibit strikingly greater stability in F1 scores than binning based features.", "Image based features also achieve significantly higher median F1 score than with binning for both datasets.", "This can in part be explained by the observation stated earlier that some bin size choices tested perform much worse than others because of not being fine-tuned.", "There is no difference between feature types in maximum F1 score for EBG whereas in BLOG, image based feature achieve somewhat higher maximum F1 score (0.83) than binning based features (0.78).", "We believe that the reason why image based features work so well is that VGG-19, the image model we use to extract features, is powerful enough to recognize the slopes in plots which represent the smoothness in our case.", "Figure 3", "(d), shows that for EBG, ANN and GNB achieve higher maximum F1 score (0.95), whereas for BLOG, GNB achieve higher maximum F1 score (0.83).", "KNN and ANN consistently achieve far more stable F1 scores than other classification methods.", "In both datasets, KNN achieves significantly higher median F1 score than other classification methods.", "ANN also follows the same pattern with the exception of GNB in BLOG evaded.", "We believe that the reason why KNN and ANN achieve relatively high and stable performance is in their nature of being able to adapt to diverse and complex feature spaces.", "In summary we conclude that BERT with probabilities is a good choice for dimension 1.", "(We remind the reader that in contrast, in the area of synthetic text detection (Gehrmann et al., 2019)", "GPT2 had the edge over BERT).", "Image based features are a clear winner in dimension 2 while KNN and ANN are the best candidates for dimension 3.", "Key to note as well is that the top performing architectures in Table 2 differ across datasets indicating the need for dataset specific choices.", "Figure 4 validates our intuition from Section 3 that the text generated by obfuscators is less smooth than the original text.", "Using EBG obfuscated dataset and BERTBASE for illustration, we first sort words in a document by estimated probability and plot average probability at each rank.", "The steeper the fall in the curve, the lower the smoothness of text.", "This plot shows that original documents are generally more smooth than obfuscated documents.", "The average detection error rates (Mutant-X embeddingCNN: 0.72, SN-PAN16: 0.48, and DS-PAN17: 0.07) are also consistent with the plot.", "These results show that Mutant-X is the most stealthy obfuscator while DS-PAN17 is the least stealthy obfuscator.", "In this paper, we showed that the state-of-the-art authorship obfuscation methods are not stealthy.", "We showed that the degradation in text smoothness caused by authorship obfuscators allow a detector to distinguish between obfuscated documents and original documents.", "Our proposed 0 100 200 300 400 500 Sorted words in documents 0.0 0.2 0.4 0.6 0.8 1.0 A v e r a g e o c c u r e n c e p r o b a b ili t y Original Mutant-X embeddingCNN DS-PAN17 SN-PAN16 Figure 4: Comparison between different obfuscators and original documents on the basis of average sorted probabilities extracted by BERTBASE for EBG obfuscated dataset.", "obfuscation detectors were effective at classifying obfuscated and 
evaded documents (F1 score as high as 0.92 and 0.95, respectively).", "Our findings point to future research opportunities to build stealthy authorship obfuscation methods.", "We suggest that obfuscation methods should strive to preserve text smoothness in addition to semantics." ]
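The detection pipeline described in the sentences above (word-likelihood extraction with an off-the-shelf language model, followed by binning-based features) can be made concrete with a short sketch. This is a minimal illustration rather than the authors' GLTR-based implementation: it assumes the Hugging Face transformers package and NumPy, it operates on GPT-2 subword tokens rather than whole words, and the function names and bin widths are our own choices.

```python
# Minimal sketch (not the authors' GLTR-based code): per-token likelihoods
# from a pretrained GPT-2, turned into binning-based features. Works on
# GPT-2 subword tokens rather than words; the bin widths shown are just
# two of the settings mentioned in the text.
import torch
import numpy as np
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")   # the 117M model
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def token_likelihoods(text):
    """Probability and rank of each token given its left context."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0]                    # [seq_len, vocab]
    probs = torch.softmax(logits[:-1], dim=-1)           # prefix -> next token
    targets = ids[0, 1:]
    p = probs[torch.arange(targets.numel()), targets]    # p(w_i | w_1..i-1)
    rank = (probs > p.unsqueeze(1)).sum(dim=1) + 1       # 1 = most likely token
    return p.tolist(), rank.tolist()

def bin_features(values, bin_width, max_value):
    """Proportion of tokens whose likelihood falls into each fixed-width bin."""
    edges = np.arange(0.0, max_value + bin_width, bin_width)
    hist, _ = np.histogram(values, bins=edges)
    return hist / max(len(values), 1)

probs, ranks = token_likelihoods("The quick brown fox jumps over the lazy dog.")
prob_vec = bin_features(probs, bin_width=0.010, max_value=1.0)
rank_vec = bin_features(ranks, bin_width=100, max_value=len(tokenizer))
```

The resulting fixed-length vectors can then be fed to any of the five classifiers mentioned above (e.g., scikit-learn's SVC, RandomForestClassifier, or KNeighborsClassifier with their default parameters).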
[ "abstain", "abstain", "method", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "method", "result", "result", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "objective", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "other", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "abstain", "method", "abstain" ]
[ "Computing precise evidences , namely minimal sets of sentences that support or refute a given claim, rather than larger evidences is crucial in fact verification (FV), since larger evidences may contain conflicting pieces some of which support the claim while the other refute, thereby misleading FV.", "Despite being important, precise evidences are rarely studied by existing methods for FV.", "It is challenging to find precise evidences due to a large search space with lots of local optimums.", "Inspired by the strong exploration ability of the deep Q-learning network (DQN), we propose a DQN-based approach to retrieval of precise evidences.", "In addition, to tackle the label bias on Q-values computed by DQN, we design a postprocessing strategy which seeks best thresholds for determining the true labels of computed evidences.", "Experimental results confirm the effectiveness of DQN in computing precise evidences and demonstrate improvements in achieving accurate claim verification.", "1 1 Introduction With the growing false information, such as fake news, political deception and online rumors, automatic fact-checking systems have emerged to automatically identify and filter this information.", "Fact verification (FV) is a special fact-checking task that aims to retrieve related evidences from a text corpus to verify a textual claim.", "Taking Figure 1 as example, an existing method for FV first retrieves related documents from the given corpus at stage 1 (namely the document retrieval stage), then finds key sentences from the documents at stage 2 (namely the sentence selection stage), and finally treats the set of key sentences as an evidence to verify the claim at stage Corresponding author 1 Source code and data are available at https:// github.com/sysulic/DQN-FV .", "3 (namely the claim verification stage).", "As can be seen in this example, it is desirable to retrieve an evidence consisting of the first two sentences only, since it does not contain unnecessary sentences to determine the truthfulness of the claim and can alleviate human efforts to further validate the evidence.", "More importantly, an evidence containing unnecessary sentences may involve conflicting pieces some of which support the claim while the other refute the claim.", "Thus, it is crucial to compute minimal sets of sentences that can determine the truthfulness of the claim.", "In this paper, we refer to a minimal set of sentences that supports or refutes a given claim as a precise evidence .", "Existing methods for FV do not target the retrieval of precise evidences.", "Most existing studies (Thorne et al., 2018b; Nie et al., 2019; Zhou et al., 2019; Liu et al., 2020; Zhong et al., 2020; Ye et al., 2020; Subramanian and Lee, 2020; Wang et al., 2020) formulate FV as a three-stage pipeline task as illustrated in Figure", "1. 
This way makes the retrieval of precise evidences extremely diffi-cult since the sentence selection stage is required to select a precise set of relevant sentences rather than a fixed number of sentences as in existing methods.", "To the best of our knowledge, TwoWingOS (Yin and Roth, 2018) is the only method by now which does not follow the three-stage pipeline.", "Instead, it exploits a supervised training scheme to train the last two stages jointly and is able to compute precise evidences.", "However, it exhibits a significantly worse performance than other state-of-the-art methods for FV, especially in terms of the recall of evidences.", "Therefore, there is still a need for designing new methods to compute precise evidences.", "These methods are expected to achieve better performance than TwoWingOS.", "It is challenging to compute precise evidences.", "On one hand, the search space for precise evidences is very large.", "For example, in the benchmark Fact Extraction and VERification (FEVER) dataset (Thorne et al., 2018b) the average number of sentences for each claim input to the sentence selection stage is 40 , and an output evidence has up to 5 sentences.", "Hence there are up to (cid:80) 5 i =1 C i 40 = 760 , 098 candidates in the search space.", "On the other hand, greedy search of precise evidences easily falls into a local optimum.", "As shown in our experiments (see Table 6), a greedy search method does not perform well.", "Inspired by the strong exploration ability of the Deep Q-learning Network (DQN) (Mnih et al., 2015), we develop a DQN-based approach to retrieval of precise evidences.", "In this approach, we first employ DQN to compute candidate pairs of precise evidences and their labels, and then use a post-processing strategy to refine candidate pairs.", "We notice that Q-values computed by DQN has label bias due to two reasons.", "On one hand, the label NOT ENOUGH INFO does not locate at the same concept level as SUPPORTS or REFUTES.", "On the other hand, there is not a fixed range for Q-values, making Q-values hard to accurately estimate.", "Thus, a post-processing strategy is needed to tackle the label bias on Q-values.", "We develop such a strategy to seek best thresholds in determining the true labels of computed evidences.", "Our experimental results on FEVER (Thorne et al., 2018b) confirm that our DQN-based approach is effective in finding precise evidences.", "More importantly, the approach is shown to outperform state-of-the-art methods for FV.", "The FEVER 1.0 shared task (Thorne et al., 2018b) aims to develop an automatic fact verification system to determine the truthfulness of a textual claim by extracting related evidences from Wikipedia.", "Thorne et al. 
(2018a) has formalized this task, released a large-scale benchmark dataset FEVER (Thorne et al., 2018b), and designed the three-stage pipeline framework for FV, which consists of the document retrieval stage, the sentence selection stage and the claim verification stage.", "Most existing methods follow this framework and mainly focus on the last stage (Liu et al., 2020).", "For the document retrieval stage, most methods reuse the document retrieval component of top-performing systems (Hanselowski et al., 2018; Yoneda et al., 2018; Nie et al., 2019).", "For the sentence selection stage, there are three approaches commonly used, including keyword matching, supervised classification, and sentence similarity scoring (Thorne et al., 2018b).", "For the claim verification stage, most recent studies formulate this task as a graph reasoning task (Zhou et al., 2019; Liu et al., 2020; Ye et al., 2020; Zhong et al., 2020; Subramanian and Lee, 2020; Wang et al., 2020).", "Different from most existing methods that focus on claim verification, Yin and Roth (2018) proposed a supervised training method named TwoWingOS to jointly conduct sentence selection and claim verification.", "Nowadays pre-trained language models like BERT (Devlin et al., 2019) have been widely used in claim verification (Li et al., 2019; Zhou et al., 2019; Soleimani et al., 2020).", "Following this way we employed RoBERTa (Liu et al., 2019), an enhanced version of BERT, as the sentence encoder in our DQN-based approach in experiments.", "Reinforcement learning (RL) is about an agent interacting with the environment, objective to maximize the cumulative rewards of a sequence of states and actions by adjusting its policies.", "Q-Learning (Mnih et al., 2015) is a popular reinforcement learning technique.", "It aims to approximate the optimal value function Q ( o, a ) to measure the expected long-term rewards for a given pair of state o and action a .", "Deep Q-learning Network (DQN) (Mnih et al., 2015) is a combination of deep learning and Q-Learning.", "It typically uses the following Equation (1) derived from the Bellman equation (Cao and ZhiMin, 2019) to approximate the optimal Q-value function: Q ( o ( t ) , a ( t ) ) = E o ( t +1) [ r ( t ) + max a (cid:48) Q ( o ( t +1) , a (cid:48) )] , (1) where o ( t ) , a ( t ) , r ( t ) respectively denote the state, action and reward at step t , and [0 , 1] is a discounted factor for future rewards.", "Given a set of candidate sentences S = { s 1 , s 2 , . . . 
} , a claim c , a set of precise evidences E 2 S , and a true label y Y = { T,F,N } that determines whether every precise evidence supports or refutes the claim, where T / F / N denotes SUP-PORTS/REFUTES/NOT ENOUGH INFO, we aim to train a model to predict a precise evidence; more precisely, to train a model for retrieving an evidence E S and predicting a label y Y such that y = y and E = E for some E E .", "This goal is different from the goal targeted by existing methods, which aim to retrieve an evidence E S and predict a label y Y such that y = y and E E for some E E .", "We define the four ingredients of DQN namely states, actions, transitions and rewards as follows: State .", "Action .", "An action a is a sentence in S .", "Transition .", "A transition at step t is a tuple ( o ( t ) , a ( t ) , o ( t +1) ) , where o ( t ) = ( c, E ( t ) , y ) , o ( t +1) = ( c, E ( t +1) , y ) and E ( t +1) = E ( t ) { a ( t ) } .", "Reward .", "The reward r for a transition ( o ( t ) , a ( t ) , o ( t +1) ) is defined as r ( t ) = 1 , y = y ( y = N E E : a ( t ) E ) 1 , y (cid:54) = y | E ( t +1) | = K 0 , otherwise (2) where the number K is a hyper-parameter, and | S | denotes the cardinality of a set S .", "The core of our proposed approach is the DQN-based model, illustrated in Figure", "2. 3.2.1 Sentence Encoding Module We employ RoBERTa in this module to extract the final hidden state of (cid:104) s (cid:105) as the sentence representation, where (cid:104) s (cid:105) and (cid:104) /s (cid:105) mentioned in the following are the special classification tokens in RoBERTa.", "Specifically, following KGAT (Liu et al., 2020), we first concatenate the claim c , the document title l , and a sentence s ( resp. an action a ) as (cid:104) s (cid:105) c (cid:104) /s (cid:105) l (cid:104) /s (cid:105) s (cid:104) /s (cid:105) ( resp.", "(cid:104) s (cid:105) c (cid:104) /s (cid:105) l (cid:104) /s (cid:105) a (cid:104) /s (cid:105) ) and then feed it into RoBERTa to obtain the sentence representation h s R d 0 ( resp. the action representation h a R d 0 ), where d 0 is the dimension of the representation.", "We also feed the claim (cid:104) s (cid:105) c (cid:104) /s (cid:105) alone to obtain the claim representation h c R d 0 .", "Context sub-module .", "It is obvious that the sentences in an evidence are always contextual dependent, so we apply two different networks BiLSTM (Nguyen et al., 2016) and Transformer (Vaswani et al., 2017) for comparison.", "These two different networks are widely used to encode contextual-aware information of sequential text in the NLP community.", "Formally, we either define [ h (cid:48) E 0 , . . . , h (cid:48) E | E | 1 ] = BiLSTM ( h a , H E ) (3) if the BiLSTM network is used, or define [ h (cid:48) E 0 , . . . , h (cid:48) E | E | 1 ] = Transformer ( h a , H E ) (4) if the Transformer is used, where H E = [ h E 0 , . . . 
, h E | E | 1 ] , h E i R d 0 is the i -th sentence representation in E , h (cid:48) E i R d 1 is the corresponding context-aware sentence representation in E , and d 1 is the dimension of the representation.", "Aggregation sub-module .", "This sub-module is used to fuse the sentence representations in evidences to obtain an aggregated evidence representation.", "We also apply two different networks in this sub-module: Transformer and attention.", "Unlike the Transformer with self-attention in the first submodule, the query in this sub-module is the claim and the key/value is the context-aware sentence representation from the first sub-module.", "For the Figure 2: The architecture of the DQN-based model.", "e = | E | 1 (cid:88) i =0 i h (cid:48) i (5) i = exp( MLP ([ h c ; h (cid:48) i ])) | E | 1 (cid:88) j =0 exp( MLP ([ h c ; h (cid:48) j ])) (6)", "where e R d 1 is the aggregated evidence representation, MLP ( ) = Linear ( ReLU ( Linear ( ))) is a two-layer fully connected network using recti-fied linear unit as the activation function, and [; ] denotes the concatenation of two vectors.", "This module is used to obtain the Q-value vector for all labels, simply written as Q ( o, a ; ) for denoting the set of learnable parameters, which is formally defined as", "where MLP ( ) = Linear ( ReLU ( Linear ( ))) is similar to MLP ( ) used in Equation (6) except that different parameters in linear layers are used, W R d 0 d 0 is a learnable matrix, and Q ( o, a ; ) R d 2 for d 2 the number of different labels.", "Given a transition ( o ( t ) , a ( t ) , o ( t +1) ) and its reward r ( t ) , we use the Double Deep Q-learning Network (DDQN) (Mnih et al., 2015) technique to train our", "et al., 2015).", "This error is formally defined as = Q y ( o ( t ) , a ( t ) ; ) v ( o ( t +1) , r ( t ) ) (8) where v ( ) denotes the target value defined as v ( o, r ) = (cid:40) r, if | E | = K r + Q y ( o, a ; ) otherwise (9) for a = arg max a S \\ EQ y ( o, a ; ) .", "In the above equation, Q ( ; ) is the target network in DDQN, Q y denotes the Q-value of y for y the predicted label in o , E is the predicted evidence in o , and [0 , 1] is a hyper-parameter representing the discount factor.", "where B is a batch of transition-reward pairs.", "Algorithm 1 shows how to train the DQN-based model.", "First, we initialize three replay memories, the DQN-based model, and the target network in Line 1-3.", "Then, in Line 9-17, we obtain the training Algorithm 1: Model training for DQN, where the memory capacity M , the maximum evidence size K , the maximum number of epochs T and the reset interval C are hyper-parameters.", "transition-reward pairs by letting the DQN-based model interact with the environment in an (cid:15) -greedy exploration-exploitation way (Mnih et al., 2015).", "Finally, in Line 19, we sample a mini-batch of transition-reward pairs to update the DQN-based model, while in Line 20, for every C steps we reset the target network to the DQN-based model.", "Algorithm 2 shows how to retrieve a pair (candi-date list, score list) for each label, where the can-Algorithm", "didate list stores progressively enlarged sentence sets, where each sentence set is a candidate of the predicted evidence, and the score list stores the strengths that the corresponding candidates support the label.", "We enlarge the two-list pair for each label through a greedy-search way (Line 3-10).", "Specifi-cally, for each label, we first select the action with the largest Q-value (Line 5), then update the state by adding the chosen 
action into its predicted evidence (Line 7), and finally add the evidence and score into the corresponding list (Line 8).", "Algorithm 3 shows how to compute the target evidence-label pair from the (candidate list, score list) pairs obtained by Algorithm 2, where the thresholds are determined by Algorithm", "4. In this algorithm, we first use the condition given by Algorithm 4 to predict N (Line 3), and then refine the prediction of T (Line 6) and F (Line 9) in turn.", "In Line 2, we focus on the evidences with the highest score for T and F , while we ignore the evidence for N , due to the following reasons: (1) there are no supporting sentences in the evidence for N ; (2) we follow a strategy commonly used in existing methods for FV, i.e. , focusing only on the evidence for T and F .", "Algorithm 4 shows how to search for the best thresholds ( T , F , N ) to maximize the Label Accuracy (LA) over the development set.", "We first call Algorithm 2 to construct a set of tuples ( q T , q F , q N , y ) from the development set, each of which corresponds to a development instance, where q T , q F and q N are respectively the output score lists for the three labels T, F and N , and y is the corresponding true label (Line 1).", "We then go through the following two stages.", "The first stage (Line 3-11) finds a threshold N that can maximize LA for label N , where maximizing LA is amount to maximizing the difference between the number of correctly and incorrectly predicted instances.", "The second stage (Line 12-28) finds the thresholds T and F that can maximize LA for label T and F , respectively, where those instances that satisfy the conditions for N are neglected (Line 13).", "Our experiments are conducted on the large-scale benchmark dataset FEVER (Thorne et al., 2018a), which consists of 185,455 annotated claims with a set of 5,416,537 Wikipedia documents from the June 2017 Wikipedia dump.", "All claims are labeled as SUPPORTS, REFUTES, or NOT ENOUGH INFO.", "What's more, each claim for SUPPORTS and REFUTES is accompanied by some evidences extracted from Wikipedia documents.", "The dataset partition is kept the same with Thorne et al. (2018b) as shown in Table", "1. 
4.1.2 Evaluation Metrics The task has five evaluation metrics: 1) FEVER, the primary scoring metric that measures the accuracy of claim verification with a requirement that the predicted evidences fully covers the ground-true evidences for SUPPORTS and REFUTES claims; 2) Label Accuracy (LA), the accuracy of claim verification without considering the validity of the Method T F NT-T -1.23361155390739 -1.26671668887138 0.0153777748346328 T-A -0.0631487071514129 0.0747150778770446 -1.48811344802379 BiLSTM-T 0.184351719915866 -0.64785711467266 -0.465365642681717 BiLSTM-A -0.0904324240982532 -0.795884847640991 -0.403448916971683 Table 2: The thresholds determined by Algorithm", "predicted evidences; 3) Precision (Pre), the macro-precision of the evidences for SUPPORTS and REFUTES claims; 4) Recall, the macro-recall of the evidences for SUPPORTS and REFUTES claims; 5) F1, the F 1 -score of the evidences for SUPPORTS and REFUTES claims.", "We choose F1 as our main metric because it can directly show the performance of methods on retrieval of precise evidences.", "Document retrieval .", "The document retrieval stage is kept the same as previous work (Hanselowski et al., 2018; Zhou et al., 2019; Liu et al., 2020; Ye et al., 2020).", "Given a claim, the method first utilizes the constituency parser from AllenNLP (Gardner et al., 2018) to extract potential entities from the claim.", "Then it uses the entities as search queries to find the relevant documents via the online Me-diaWiki API 2 .", "The convinced articles are reserved (Hanselowski et al., 2018).", "Sentence selection and claim verification .", "We implement our DQN-based model with PyTorch and train it with the AdamW (Loshchilov and Hut-ter, 2019) optimizer while keeping the sentence encoding module frozen and inheriting the RoBERTa implementation from Wolf et al. (2020) 3 .", "Specifically, the learning rate is 5e-6, the batch size is 128, the training epochs is 30, the iteration steps (or largest evidence size, i.e. , K ) is 5, the discount factor is 0.95, and the layer number of the context sub-module is", "3. 
Prioritized experience replay memory (Schaul et al., 2016) with a capacity of 10,000 is used to store transitions.", "The target network is reset when DQN is updated every 10 times.", "The probability of (cid:15) -greedy policy starts at 0.9 and decays exponentially towards 0.05, and the rate of the decay is 1 2000 .", "Table 2 shows the thresholds 2 https://www.mediawiki.org/wiki/API: Main_page 3 https://github.com/huggingface/ pytorch-transformers T , F and N computed by Algorithm", "We compare our method with the following baselines, including six methods that focus on claim verification and one joint method TwoWingOS (Yin and Roth, 2018).", "The six methods include: (1) GEAR (Zhou et al., 2019) uses two kinds of attentions to conduct reasoning and aggregation in a graph model; (2) KGAT (Liu et al., 2020) employes the Kernel Graph Attention Network to capture fine-grained information over evidences for more accurate claim verification; (3) DREAM (Zhong et al., 2020) introduces semantic structures for evidences obtained by semantic role labeling in claim verification; (4) CorefBERT (Ye et al., 2020) extends KGAT and can explicitly model co-reference relationship in context; (5) HESM (Subramanian and Lee, 2020) is a framework that can encode and attend the claim and evidence sets at different levels of hierarchy; (6) DGAT (Wang et al., 2020) is a double graph attention network that performs well in multi-domain datasets.", "The join method TwoWingOS (Yin and Roth, 2018) exploits a two-wing optimization strategy that optimizes sentence selection and claim verification in a jointly supervised training scheme.", "As shown in Table 3, we implement four versions of the evidence encoding module and evaluate them on the DEV set and the blind TEST set.", "The FEVER metric of the top six methods is calculated with the imprecise evidences, so we introduce the FEVER@5 metric for a fair comparison.", "We analyze our method from the following four aspects.", "Comparison with the state-of-the-art methods .", "Results in Table 3 show that all versions (except BiLSTM-A) with post-processing significantly outperform the state-of-the-art methods on FEVER, Pre, and F1, especially for T-A on F1, which shows the superiority of our method in retrival of precise evidences.", "However, none of the four versions of our method can achieve the best result on FEVER@5, LA, and Recall.", "The reason for low recall is that the number of sentences in precise evidences is less than that in imprecise evidences, which means other methods have a higher probability to recall the ground-true evidences than ours.", "Besides, the relatively low LA is caused by the Method DEV TEST FEVER@5 FEVER LA Pre Recall F1 FEVER@5 FEVER LA Pre Recall F1 GEAR 70.69 -74.84 24.08 86.72 37.69 67.10 -71.60 -36.87 KGAT 76.11 -78.29 27.79 94.37 42.34 70.38 -74.07 25.21 87.47 39.14 DREAM --26.60 87.33 40.79 70.60 -76.85 25.63 85.57 39.45 CorefBERT ---71.80 --39.14 HESM 73.44 -75.77 --71.48 -74.64 -52.78 DGAT ---66.91 -71.79 -TwoWingOS -56.16 78.90 47.73 53.81 50.59 -54.33 75.99 44.68 49.91 47.15 Ours T-T 72.83 71.55 78.18 50.42 81.82 62.39 70.16 68.91 75.74 48.76 79.91 60.56 (w./o.) 72.90 70.00 74.87 70.43 68.23 73.13 T-A 73.32 72.79 78.35 54.75 79.92 64.98 70.81 70.28 76.14 52.24 77.93 62.55 (w./o.) 73.29 72.60 78.12 70.82 70.18 76.00 BiLSTM-T 73.15 63.77 73.91 48.06 71.06 57.34 70.54 61.51 70.20 45.97 69.43 55.32 (w./o.) 73.19 55.39 63.55 70.81 53.21 61.68 BiLSTM-A 72.99 70.88 77.79 35.50 76.54 48.50 70.11 68.21 75.53 33.76 74.50 46.46 (w./o.) 
73.20 65.65 71.21 70.55 63.38 69.32 Table 3: Performance on DEV set and blind TEST set of FEVER (%).", "low Recall of precise evidences.", "To further clarify this point, we evaluate our method on a subset of the DEV set where the ground-true evidences are recalled successfully.", "Our method improves significantly the performance on this subset, as shown in Table 4, which justifies our point of view.", "FEVER is affected by the LA and Recall, thereby the low FEVER@5 is also due to the low recall of precise evidences.", "In addition, the results reported in Table 5 show that our method can significantly reduce the number of unnecessary sentences in a predicted evidence.", "Comparison between different versions .", "As shown in Table 3, T-T and T-A perform respectively better than BiLSTM-T and BiLSTM-A on almost all metrics except that T-T is slightly worse width FEVER@5 FEVER LA Pre Recall F1 1 60.73 54.91 72.69 52.76 58.57 55.51 (w./o.) 50.09 46.55 53.00 2 60.74 54.94 72.69 52.84 58.66 55.59 (w./o.) 50.09 46.53 53.00 3 60.70 54.96 72.69 52.84 58.67 55.60 (w./o.) 50.10 46.54 53.00 4 60.67 54.95 72.69 52.81 58.66 55.58 (w./o.) 50.09 46.54 53.00 5 60.68 54.95 72.69 52.84 58.68 55.61 (w./o.) 50.09 46.54 53.00 Table 6: The beam-search result of KGAT on the DEV set (%).", "than BiLSTM-A on FEVER@5, which suggests Transformer can encode better context-aware representations than BiLSTM in our context sub-module.", "Moreover, we find that T-A performs better than T-T on almost all metrics except Recall and that BiLSTM-A is worse than BiLSTM-T on Pre and F1.", "This contrary result shows that the performance of the aggregation sub-module is impacted by the context sub-module.", "Thus, the choice between Transformer and Attention should depend on the context sub-module.", "Overall, T-A achieves the best performance among all the four versions of our # label claim ground-true evidences predicted evidences GEAR KGAT Our method (T-A) 1 F Savages was exclusively a German film.", "Comparison on retrieval of precise evidences .", "TwoWingOS is a supervised-learning method that can also find precise evidences.", "Although it achieves slightly better performance on LA than ours, its F1 and other metrics are much worse, indicating that it performs worse than our method except for BiLSTM-A in retrieval of precise-evidences.", "We also enhance KGAT to conduct beam-search for finding precise evidences and report the results in Table 6.", "The F1 score of KGAT is always higher than TwoWingOS but is still lower than our method except for BiLSTM-A.", "Comparison between the methods with and without post-processing .", "It can be seen from Table 3 and Table 6 that, post-processing (namely threshold searching and final prediction from candidates) consistently improves FEVER and LA.", "Although with post-processing, our method (except T-A) achieves slightly lower scores on FEVER@5, KGAT still achieves significantly higher scores on FEVER@5 as on other metrics.", "These results show that post processing is very important in retrieval of precise evidences.", "In Table 7 we provide some cases to demonstrate the effectiveness of our method (T-A) in retrieving precise evidences.", "In case#1 and case#2, our method exactly finds ground-true evidences without introducing any unnecessary sentence, while GEAT and KGAT cannot.", "In case#3 and case#4, our method generates less unnessary sentences in prdicted evidents than GEAT and KGAT do.", "In this paper, we have proposed a novel DQN-based approach to finding precise 
evidences for fact verification.", "It provides a method to solve the precise-evidence problem by first employing a DQN to compute some candidates and then introducing a post-processing strategy to extract the target evidence and its label from the candidates.", "Experimental results show that the approach achieves state-of-the-art performance in terms of retrieval of precise evidences.", "Besides, to the best of our knowledge, it is the first attempt to employ DQN in the fact verification task.", "This work is supported by the Guangdong Province Science and Technology Plan projects (2017B010110011), the National Natural Science Foundation of China (No. 61876204, 61976232, and 51978675), the National Key R&D Program of China (No. 2018YFC0830600), Guangdong Province Natural Science Foundation (No. 2018A030313086), All-China Federation of Returned Overseas Chinese Research Project (No. 17BZQK216)." ]
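To make the transition reward of Equation (2) in the preceding section concrete, here is a minimal, self-contained sketch. All names are ours; gold precise evidences are assumed to be given as sets of sentence ids, and when both the +1 and -1 conditions hold we return +1, which is one possible reading of the equation.

```python
# Sketch of the transition reward in Equation (2). Labels are "T", "F", "N";
# gold_evidences is a list of gold precise evidences (sets of sentence ids).
def reward(pred_label, true_label, action, evidence_after, gold_evidences, K=5):
    supported = true_label == "N" or any(action in e for e in gold_evidences)
    if pred_label == true_label and supported:
        return 1.0
    if pred_label != true_label or len(evidence_after) == K:
        return -1.0
    return 0.0

# The predicted label is correct and the newly added sentence s3 belongs to a
# gold precise evidence, so the transition is rewarded; a wrong label is penalized.
assert reward("F", "F", "s3", {"s1", "s3"}, [{"s1", "s3"}]) == 1.0
assert reward("T", "F", "s2", {"s2"}, [{"s1", "s3"}]) == -1.0
```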
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "method", "abstain", "abstain", "abstain", "objective", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "other", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "objective", "abstain", "abstain", "objective", "other" ]
[ "Recently, there has been much interest in the question of whether deep natural language understanding models exhibit systematicity generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear.", "There is accumulating evidence that neural models often generalize non-systematically.", "We examined the notion of systematicity from a linguistic perspective, defining a set of probes and a set of metrics to measure systematic behaviour.", "We also identified ways in which network architectures can generalize non-systematically, and discuss why such forms of generalization may be unsatisfying.", "As a case study, we performed a series of experiments in the setting of natural language inference (NLI), demonstrating that some NLU systems achieve high overall performance despite being non-systematic.", "Language allows us to express and comprehend a vast variety of novel thoughts and ideas.", "This creativity is made possible by compositionality the linguistic system builds utterances by combining an inventory of primitive units such as morphemes, words, or idioms (the lexicon ), using a small set of structure-building operations (the grammar ; Camap, 1947; Fodor and Pylyshyn, 1988; Hodges, 2012; Janssen et al., 2012; Lake et al., 2017b; Szab, 2012; Zadrozny, 1994; Lake et al., 2017a).", "One property of compositional systems, widely studied in the cognitive sciences, is the phenomenon of systematicity .", "Systematicity refers to the fact that lexical units such as words make consistent contributions to the meaning of the sentences in which they appear.", "Fodor and Pylyshyn Corresponding author (1988) provided a famous example: If a competent speaker of English knows the meaning of the sentence John loves the girl , they also know the meaning of The girl loves John .", "This is because for speakers of English knowing the meaning of the first sentence implies knowing the meaning of the individual words the , loves , girl , and John as well as grammatical principles such as how transitive verbs take their arguments.", "But knowing these words and principles of grammar implies knowing how to compose the meaning of the second sentence.", "Deep learning systems now regularly exhibit very high performance on a large variety of natural language tasks, including machine translation (Wu et al., 2016; Vaswani et al., 2017), question answering (Wang et al., 2018; Henaff et al., 2016), visual question answering (Hudson and Manning, 2018), and natural language inference (Devlin et al., 2018; Storks et al., 2019).", "Recently, however, researchers have asked whether such systems generalize systematically (see 4).", "Systematicity is the property whereby words have consistent contributions to composed meaning; the alternative is the situation where words have a high degree of contextually conditioned meaning variation .", "In such cases, generalization may be based on local heuristics (McCoy et al., 2019b; Niven and Kao, 2019), variegated similarity (Albright and Hayes, 2003), or local approximations (Veldhoen and Zuidema, 2017), where the contribution of individual units to the meaning of the sentence can vary greatly across sentences, interacting with other units in highly inconsistent and complex ways.", "This paper introduces several novel probes for testing systematic generalization.", "We employ an artificial language to have control over systematicity and contextual meaning variation.", "Applying our probes to this language in an NLI 
setting reveals that some deep learning systems which achieve very high accuracy on standard holdout evaluations do so in ways which are non-systematic: the networks do not consistently capture the basic notion that certain classes of words have meanings which are consistent across the contexts in which they appear.", "The rest of the paper is organized as follows.", "2 discusses degrees of systematicity and contextually conditioned variation; 3 introduces the distinction between openand closed-class words, which we use in our probes.", "5 introduces the NLI task and describes the artificial language we use; 6 discusses the models that we tested and the details of our training setup; 7 introduces our probes of systematicity and results are presented in 8.", "1 2 Systematicity and Contextual Conditioning Compositionality is often stated as the principle that the meaning of an utterance is determined by the meanings of its parts and the way those parts are combined (see, e.g., Heim and Kratzer, 2000).", "Systematicity, the property that words mean the same thing in different contexts, is closely related to compositionality; nevertheless, compositional systems can vary in their degree of systematicity.", "At one end of the spectrum are systems in which primitive units contribute exactly one identical meaning across all contexts.", "This high degree of systematicity is approached by artificial formal systems including programming languages and logics, though even these systems don't fully achieve this ideal (Cantwell Smith, 1996; Dutilh Novaes, 2012).", "The opposite of systematicity is the phenomenon of contextually conditioned variation in meaning where the contribution of individual words varies according to the sentential contexts in which they appear.", "Natural languages exhibit such context dependence in phenomena like homophony, polysemy, multi-word idioms, and co-compositionality.", "Nevertheless, there are many words in natural languageespecially closed-class words like quan-tifiers (see below)which exhibit very little variability in meaning across sentences.", "The logical extremea system where each word has a different and unrelated meaning every time it occursis clearly of limited usefulness since it would make generalization impossible.", "Nevertheless, learners with sufficient memory capacity and flexibility of representation, such as deep learning models, can learn systems with very high degrees of contextual conditioningin particular, higher than human language learners.", "An important goal for building systems that learn and generalize like people is to engineer systems with inductive biases for the right degree of systematicity.", "In 8, we give evidence that some neural systems are likely too biased toward allowing contextually conditioned meaning variability for words, such as quantifiers, which do not vary greatly in natural language.", "Natural language distinguishes between content or open-class lexical units and function or closed-class lexical units.", "The former refers to categories, such a nouns and verbs, which carry the majority of contentful meaning in a sentence and which permit new coinages.", "Closed-class units, by contrast, carry most of the grammatical structure of the sentence and consist of things like inflectional morphemes (like pluralizing -s in English) and words like determiners, quantifiers, and negation (e.g., all, some, the in English).", "These are mostly fixed; adult speakers do not coin new quantifiers, for example, the way that they coin new 
nouns.", "Leveraging this distinction gives rise to the possibility of constructing probes based on jabberwocky-type sentences .", "This term references the poem Jabberwocky by Lewis Carroll, which combines nonsense open-class words with familiar closed-class words in a way that allows speakers to recognize the expression as well formed.", "For example, English speakers identify a contradiction in the sentence All Jabberwocks flug, but some Jabberwocks don't flug , without a meaning for jabberwock and flug .", "This is possible because we expect the words all , some , but , and don't to contribute the same meaning as they do when combined with familiar words, like All pigs sleep, but some pigs don't sleep .", "Using jabberwocky-type sentences, we tested the generalizability of certain closed-class word representations learned by neural networks.", "Giving the networks many examples of each construction with a large variety of different content words that is, large amounts of highly varied evidence about the meaning of the closed-class wordswe asked during the test phase how fragile this knowledge is when transferred to new open-class words.", "That is, our probes combine novel open-class words with familiar closed-class words, to test whether the closed-class words are treated systematically by the network.", "For example, we might train the networks to identify contradictions in pairs like All pigs sleep; some pigs don't sleep , and test whether the network can identify the contradiction in a pair like All Jabberwocks flug; some Jabberwocks don't flug .", "A systematic learner would reliably identify the contradiction, whereas a non-systematic learner may allow the closed-class words ( all, some, don't ) to take on contextually conditioned meanings that depend on the novel context words.", "There has been much interest in the problem of systematic generalization in recent years (Bahdanau et al., 2019; Bentivogli et al., 2016; Lake et al., 2017a,b; Gershman and Tenenbaum, 2015; McCoy et al., 2019a; Veldhoen and Zuidema, 2017; Soulos et al., 2019; Prasad et al., 2019; Richardson et al., 2019; Johnson et al., 2017, inter alia).", "In contrast to our approach (testing novel words in familiar combinations), many of these studies probe systematicity by testing familiar words in novel combinations.", "Lake and Baroni (2018) adopt this approach in semantic parsing with an artificial language known as SCAN.", "Dasgupta et al. (2018, 2019) introduce a naturalistic NLI dataset, with test items that shuffle the argument structure of natural language utterances.", "In the in the inductive logic programming domain, Sinha et al. (2019) introduced the CLUTTR relational-reasoning benchmark.", "The novel-combinations-of-familiar-words approach was formalized in the CFQ dataset and associated distribution metric of Keysers et al. (2019).", "Ettinger et al. (2018) introduced a semantic-role-labeling and negation-scope labeling dataset, which tests compositional generalization with novel combinations of familiar words and makes use of syntactic constructions like relative clauses.", "Finally, Kim et al. 
(2019) explore pre-training schemes' abilities to learn prepositions and wh-words with syntactic transformations (two kinds of closed-class words which our work does not address).", "investigates learned representations, rather than developing probes of model behavior.", "This is done either through visualization (Veldhoen and Zuidema, 2017), training a second network to approximate learned representations using a symbolic structure (Soulos et al., 2019) or as a diagnostic classifier (Giulianelli et al., 2018), or reconstructing the semantic space through similarity measurements over representations (Prasad et al., 2019).", "We make use of the Natural language inference (NLI) task to study the question of systematicity.", "The NLI task is to infer the relation between two sentences (the premise and the hypothesis ).", "Sentence pairs must be classified into one of a set of predefined logical relations such as entailment or contradiction .", "For example, the sentence All mammals growl entails the sentence All pigs growl .", "A rapidly growing number of studies have shown that deep learning models can achieve very high performance in this setting (Evans et al., 2018; Conneau et al., 2017; Bowman et al., 2014; Yoon et al., 2018; Kiela et al., 2018; Munkhdalai and Yu, 2017; Rock-tschel et al., 2015; Peters et al., 2018; Parikh et al., 2016; Zhang et al., 2018; Radford et al., 2018; Devlin et al., 2018; Storks et al., 2019).", "We adopt the formulation of NLI known as natural logic (MacCartney and Manning, 2014, 2009; Lakoff, 1970).", "Natural logic makes use of seven logical relations between pairs of sentences.", "These are shown in Table 1.", "These relations can be interpreted as the set theoretic relationship between the extensions of the two expressions.", "For instance, if the expressions are the simple nouns warthog and pig , then the entailment relation ( (cid:64) ) holds between these extensions ( warthog (cid:64) pig ) since every warthog is a kind of pig.", "For higher-order operators such as quantifiers, relations can be defined between sets of possible worlds.", "For instance, the set of possible worlds consistent with the expression All blickets wug is a subset of the set of possible worlds consistent with the logically weaker expression All red blickets wug .", "Critically, the relationship between composed expressions such as All X Y and All P Q is determined entirely by the relations between X/Y and P/Q, respectively.", "Thus, natural logic allows us to compute the relation between the whole expressions using the relations between parts.", "We define an artificial language in which such alignments are easy to compute, and use this language to probe deep learning systems' ability to generalize systematically.", "In our artificial language, sentences are generated according to the six-position template shown in Table 2, and include a quantifier (position 1), noun (position 3), and verb (position 6), with optional preand post-modifiers (position 2 and", "4) and optional negation (position 5).", "For readability, all examples in this paper use real English words; however, simulations can use uniquely identified abstract symbols (i.e., generated by gensym ).", "We compute the relation between position-aligned pairs of sentences in our language using the natural logic system (described in 5.2).", "Quanti-fiers and negation have their usual natural-language semantics in our artificial language; preand post-modifiers are treated intersectively.", "Open-class items (nouns and verbs) 
are organized into linear hierarchical taxonomies, where each open-class word is the subor super-set of exactly one other open-class item in the same taxonomy.", "For example, since dogs are all mammals , and all mammals animals , they form the entailment hierarchy dogs (cid:64) mammals (cid:64) animals .", "We vary the number of distinct noun and verb taxonomies according to an approach we refer to as block structure, described in the next section.", "In natural language, most open-class words do not appear with equal probability with every other word.", "Instead, their distribution is biased and clumpy, with words in similar topics occurring together.", "To mimic such topic structure, we group nouns and verbs into blocks .", "Each block consists of six nouns and six verbs, which form taxonomic hierarchies (e.g., lizards / animals , run / move ).", "Nouns and verbs from different blocks have no taxonomic relationship (e.g., lizards and screwdrivers or run and read ) and do not co-occur in the same sentence pair.", "Because each block includes a six verbs and six nouns in a linear taxonomic hierarchy, no single block is intrinsically harder to learn than any other block.", "The same set of closed-class words appear with all blocks of open-class words, and their meanings are systematic regardless of the open-class words (nouns and verbs) they are combined with.", "For example, the quantifier some has a consistent meaning when it is applied to some screwdrivers or some animals .", "Because closed-class words are shared across blocks, models are trained on extensive and varied evidence of their behaviour.", "We present closed-class words in a wide variety of sentential contexts, with a wide variety of different open-class words, to provide maximal pressure against overfitting and maximal evidence of their consistent meaning.", "We now describe the structure of our training blocks, holdout test set, and jabberwocky blocks.", "We also discuss our two test conditions, and several other issues that arise in the construction of our dataset.", "Training set: For each training block, we sampled (without replacement) one sentence pair for every possible combination of open-class words, that is, every combination of nouns and verbs (cid:104) noun 1 , noun 2 , verb 1 , verb 2 (cid:105) .", "Closed-class words were sampled uniformly to fill each remaining positions in the sentence (see Table 2).", "A random subset of 20% of training items were reserved for validation (early stopping) and not used during training.", "Holdout test set: For each training block, we sampled a holdout set of forms using the same nouns and verbs, but disjoint from the training set just described.", "The sampling procedure was identical to that for the training blocks.", "These holdout items allow us to test the generalization of the models with known words in novel configurations (see 8.1).", "Jabberwocky test set: Each jabberwocky block consisted of novel open-class items (i.e., nouns and verbs) that did not appear in training blocks.", "For each jabberwocky block, we began by following a Position 1 2 3 4 5 6 Category quantifier nominal premodifier noun nominal postmodifier negation verb Status Obligatory Optional Obligatory Optional Optional Obligatory Class Closed Closed Open Closed Closed Open Example All brown dogs that bark don't run Table 2: A template for sentences in the artificial language.", "sampling procedure identical to that for the train-ing/holdout sets with these new words.", "Several of our systematicity 
probes are based on the behavior of neighboring pairs of test sentences (see 7).", "To ensure that all such necessary pairs were in the jabberwocky test set, we extended the initial sample with any missing test items.", "Training conditions: Since a single set of closed-class words is used across all blocks, adding more blocks increases evidence of the meaning of these words without encouraging overfitting.", "To study the effect of increasing evidence in this manner, we use two training conditions: small with 20 training blocks and large with 185 training blocks.", "Both conditions contained 20 jabberwocky blocks.", "The small condition consisted of 51 , 743 training, 10 , 399 validation, and 3 , 694 , 005 test (holdout and jabberwocky) pairs.", "The large condition consisted of 478 , 649 training, 96 , 005 validation, and 3 , 694 , 455 test items.", "Balancing: One consequence of the sampling method is that logical relations will not be equally represented in training.", "In fact, it is impossible to simultaneously balance the distributions of syntactic constructions, logical relations, and instances of words.", "In this trade-off, we chose to balance the distribution of open-class words in the vocabulary, as we are focused primarily on the ability of neural networks to generalize closed-class word meaning.", "Balancing instances of open-class words provided the greatest variety of learning contexts for the meanings of the closed-class items.", "We analyze performance on four simple baseline models known to perform well on standard NLI tasks, such as the Stanford Natural Language Inference datasets, (Bowman et al., 2015).", "Following Conneau et al. (2017), the hypothesis u and premise v are individually encoded by neural sequence encoders such as a long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) or gated recurrent unit (GRU; Cho et al., 2014).", "These vectors, together with their element-wise product u v and element-wise difference u v are fed into a fully connected multilayer perceptron layer to predict the relation.", "The encodings u and v are produced from an input sentence of M words, w 1 , . . . , w M , using a recurrent neural network, which produces a set of a set of M hidden representations h 1 , . . . , h t , where h t = f ( w 1 , . . . , w M ) .", "The sequence encoding is represented by its last hidden vector h T .", "The simplest of four models sets f to be a bidirectional gated recurrent unit (BGRU).", "This model concatenates the last hidden state of a GRU run forwards over the sequence and the last hidden state of GRU run backwards over the sequence, for example, u = [ h M , h M ] .", "Our second embedding system is the Infersent model reported by Conneau et al. 
(2017), a bidirectional LSTM with max pooling (INFS).", "This is a model where f is an LSTM.", "Each word is represented by the concatenation of a forward and backward representation: h t = [ h t , h t ] .", "We constructed a fixed vector representation of the sequence h t by selecting the maximum value over each dimension of the hidden units of the words in the sentence.", "Our third model is a self-attentive sentence encoder (SATT) which uses an attention mechanism over the hidden states of a BiLSTM to generate the sentence representation (Lin et al., 2017).", "This attention mechanism is a weighted linear combination of the word representations, denoted by u = (cid:80) M i h i , where the weights are calculated as follows: h i = tanh ( W h i + b w ) i = e h i (cid:62) u w (cid:80) i e h i (cid:62) u w where, u w is a learned context query vector and ( W, b w ) are the weights of an affine transformation.", "This self-attentive network also has multiple views of the sentence, so the model can attend to multiple parts of the given sentence at the same time.", "Finally, we test the Hierarchical Convolution-alNetwork (CONV) architecture from (Conneau et al., 2017) which is itself inspired from the model AdaSent (Zhao et al., 2015).", "This model has four convolution layers; at each layer the intermediate representation u i is computed by a max-pooling operation over feature maps.", "The final representation is a concatenation u = [ u 1 , ..., u l ] where l is the number of layers.", "In this section, we study the systematicity of the models described in 6.1.", "Recall that systematicity refers to the degree to which words have consistent meaning across different contexts, and is contrasted with contextually conditioned variation in meaning.", "We describe three novel probes of systematicity which we call the known word perturbation probe , the identical open-class words probe , and the consistency probe .", "All probes take advantage of the distinction between closed-class and open-class words reflected in the design of our artificial language, and are performed on sentence pairs with novel open-class words (jabberwocky-type sentences; see 5.5 ).", "We now describe the logic of each probe.", "We test whether the models treat the meaning of closed-class words systematically by perturbing correctly classified jabberwocky sentence pairs with a closed-class word.", "More precisely, for a pair of closed-class words w and w (cid:48) , we consider test items which can be formed by substitution of w by w (cid:48) in a correctly classified test item.", "We allow both w and w (cid:48) to be any of the closed-class items, including quantifiers, negation, nominal post-modifiers, or the the empty string (cid:15) (thus modeling insertions and deletions of these known, closed-class items).", "Suppose that Example 1 was correctly classified.", "Substituting some for all in the premise of yields Example 2, and changes the relation from entailment ( (cid:64) ) to reverse entailment ( (cid:65) ).", "(2) Some blickets wug.", "All blockets wug.", "There are two critical features of this probe.", "First, because we start from a correctly-classified jabberwocky pair, we can conclude that the novel words (e.g., wug and blickets above) were assigned appropriate meanings.", "Second, since the perturbation only involves closed-class items which do not vary in meaning and have been highly trained, the perturbation should not affect the models ability to correctly classify the resulting sentence pair.", "If the model 
does misclassify the resulting pair, it can only be because a perturbed closed-class word (e.g., some ) interacts with the open-class items (e.g., wug ), in a way that is different from the pre-perturbation closed-class item (i.e., all ).", "This is non-systematic behavior.", "In order to rule out trivially correct behavior where the model simply ignores the perturbation, we consider only perturbations which result in a change of class (e.g., (cid:64) (cid:55) (cid:65) ) for the sentence pair.", "In addition to accuracy on these perturbed items, we also examine the variance of model accuracy on probes across different blocks.", "If a model's accuracy varies depending only on the novel open-class items in a particular block, this provides further evidence that it does not treat word meaning systematically.", "Some sentence pairs are classifiable without any knowledge of the novel words' meaning; for example, pairs where premise and hypothesis have identical open-class words.", "An instance is shown in Example 3: the two sentences must stand in contradiction, regardless of the meaning of blicket or wug .", "(3) All blickets wug.", "Some blickets don't wug.", "The closed-class items and compositional structure of the language is sufficient for a learner to deduce the relationships between such sentences, even with unfamiliar nouns and verbs.", "Our second probe, the identical open-class words probe , tests the models' ability to correctly classify such pairs.", "Consider Examples 4 and 5, which present the same two sentences in opposite orders.", "(4) All blickets wug.", "All red blickets wug.", "(5) All red blickets wug.", "All blickets wug.", "In Example 4, the two sentences stand in an entailment ( (cid:64) ) relation.", "In Example 5, by contrast, the two sentences stand in a reverse entailment ( (cid:65) ) relation.", "This is a logically necessary consequence of the way the relations are defined.", "Reversing the order of sentences has predictable effects for all seven natural logic relations: in particular, such reversals map (cid:64) (cid:55) (cid:65) and (cid:65) (cid:55) (cid:64) , leaving all other relations intact.", "Based on this observation, we develop a consistency probe of systematicity.", "We ask for each correctly classified jabberwocky block test item, whether the corresponding reversed item is also correctly classified.", "The intuition behind this probe is that whatever meaning a model assumes for the novel open-class words, it should assume the same meaning when the sentence order is reversed.", "If the reverse is not correctly classified, then this is strong evidence of contextual dependence in meaning.", "In this section, we report the results of two control analyses, and that of our three systematicity probes described above.", "We first establish that the models perform well on novel configurations of known words.", "Table 3 reports accuracy on heldout sentence pairs, described in 5.5.", "The table reports average accuracies across training blocks together with the standard deviations of these statistics.", "As can be seen in the table, all models perform quite well on holdout forms across training blocks, with very little variance.", "Because these items use the same sampling scheme and vocabulary as the trained blocks, these simulations serve as a kind of upper bound on the performance and a lower bound on the variance that we can expect from the more challenging jabberwocky-block-based evaluations below.", "Our three systematicity probes employ jabberwocky-type 
sentencesnovel open-class words in sentential frames built from known closed-class words.", "Since models are not Condition BGRU CONV SATT INFS mean(sd) mean(sd) mean(sd) mean(sd) small 95.1 0 .", "trained on these novel words, it is important to establish that they are from the same distribution as the trained words and, thus, that the models' performance is not driven by some pathological feature of the novel word embeddings.", "Trained word embeddings were initialized randomly from N (0 , 1) and then updated during training.", "Novel word embeddings were simply drawn from N (0 , 1) and never updated.", "Figure 1 plots visualizations of the trained and novel open-class word embeddings in two dimensions, using t-SNE parameters computed over all open-class words (Maaten and Hinton, 2008).", "Trained words are plotted as + , novel words as .", "Color indicates the proportion of test items containing that word that were classified correctly.", "As the plot shows, the two sets of embeddings overlap considerably.", "Moreover, there does not appear to be a systematic relationship between rates of correct classification for items containing novel words and their proximity to trained words.", "We also performed a resampling analysis, determining that novel vectors did not differ significantly in length from trained vectors ( p = 0 . 85 ).", "Finally, we observed mean and standard deviation of the pairwise cosine similarity between trained and novel words to be 0 .", "999 and 0 .", "058 respectively, confirming that there is little evidence the distributions are different.", "Recall from 7.1 that the known word perturbation probe involves insertion, deletion, or substitution", "of a trained closed-class word in a correctly classified jabberwocky-type sentence pair.", "Figure 2 plots the results of this probe.", "Each point represents a perturbation type a group of perturbed test items that share their before/after target perturbed closed-class words and before/after relation pairs.", "The upper plot displays the mean accuracy of all perturbations, averaged across blocks, and the lower plot displays the standard deviations across blocks.", "All models perform substantially worse than the holdout-evaluation on at least some of the perturbations.", "In addition, the standard deviation of accuracy between blocks is higher than the holdout tests.", "As discussed in 7.1, low accuracy on this probe indicates that closed-class words do not maintain a consistent interpretation when paired with different open-class words.", "Variance across blocks shows that under all models the behavior of closed-class words is highly sensitive to the novel words they appear with.", "Performance is also susceptible to interference from sentence-level features.", "For example, consider the perturbation which deletes a post-modifier from a sentence pair in negation, yielding a pair in cover relation.", "The self-attentive encoder performs perfectly when this perturbation is applied to a premise ( 100% 0 . 00% ), but not when applied to a hypothesis ( 86 . 60% 18 . 08% ).", "Similarly, deleting the adjective red from the hypothesis of a forward-entailing pair results in an unrelated sentence pair ( 84 . 79% 7 . 50% ) or another forward-entailing pair ( 92 . 32% , 3 . 60% ) or an equality pair ( 100% 0 . 
00% ).", "All the possible perturbations we studied exhibit similarly inconsistent performance.", "Recall that the identical open-class words probe consist of sentence pairs where all open-class lexical items were identical.", "Table 4 shows the accuracies for these probes, trained on the small language.", "Average accuracies across jabberwocky blocks are reported together with standard deviations.", "Accuracy on the probe pairs fails to reach the holdout test levels for most models and most relations besides # , and variance between blocks is much higher than in the holdout evaluation.", "Of special interest is negation ( ), for which accuracy is dramatically lower and variance dramatically higher than the holdout evaluation.", "The results are similar for the large language condition, shown in Table 5.", "Although model accuracies improve somewhat, variance remains higher than the heldout level and accuracy lower.", "Recall that these probe-items can be classified while ignoring the specific identity of their open-class words.", "Thus, the models inability to leverage this fact, and high variance across different sets novel open-class words, illustrates their sensitivity to context.", "The consistency probe tests abstract knowledge of relationships between logical relations, such as the fact that two sentences that stand in a contradiction still stand in a contradiction after reversing their order.", "Results of this probe in the small-language condition are in Table 6: For each type of relation, we show the average percentage of correctly-labeled Relation BGRU CONV SATT INFS mean(sd) mean(sd) mean(sd) mean(sd) # 99.82 0 .", "The best-performing model on negation reversal is SATT, which correctly labeled reversed items 66 .", "92% of the time.", "Although performance on negation is notably more difficult than the other relations, every model, on every relation, exhibited inter-block variance higher than that of the hold-out evaluations.", "Furthermore, as can be seen in Table 7, the large language condition yields little improvement.", "Negation pairs are still well below the hold-out test threshold, still with a high degree of variation.", "Variation remains high for many relations, which is surprising because the means report accuracy on test items that were chosen specifically because the same item, in a reverse order, was already correctly labeled.", "Reversing the order of sentences causes the model to misclassify the resulting pair, more often for some blocks than others.", "Systematicity refers to the property of natural language representations whereby words (and other units or grammatical operations) have consistent meanings across different contexts.", "Our probes test whether deep learning systems learn to represent linguistic units systematically in the natural lan-Relation BGRU CONV SATT INFS mean(sd) mean(sd) mean(sd) mean(sd) # 98.45 0 .", "guage inference task.", "Our results indicate that despite their high overall performance, these models tend to generalize in ways that allow the meanings of individual words to vary in different contexts, even in an artificial language where a totally systematic solution is available.", "This suggests the networks lack a sufficient inductive bias to learn systematic representations of words like quantifiers, which even in natural language exhibit very little meaning variation.", "Our analyses contain two ideas that may be useful for future studies of systematicity.", "First, two of our probes (known word perturbation and consistency) 
are based on the idea of starting from a test item that is classified correctly, and applying a transformation that should result in a classifiable item (for a model that represents word meaning systematically).", "Second, our analyses made critical use of differential sensitivity (i.e., variance) of the models across test blocks with different novel words but otherwise identical information content.", "We believe these are novel ideas that can be employed in future studies.", "We thank Brendan Lake, Marco Baroni, Adina Williams, Dima Bahdanau, Sam Gershman, Ishita Dasgupta, Alessandro Sordoni, Will Hamilton, Leon Bergen, the Montreal Computational and Quantitative Linguistics, and Reasoning and Learning Labs at McGill University for feedback on the manuscript.", "We are grateful to Facebook AI Research for providing extensive compute and other support.", "We also gratefully acknowledge the support of the Natural Sciences and Engineering Research Council of Canada, the Fonds de Recherche du Québec, Société et Culture, and the Canada CIFAR AI Chairs Program." ]
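The artificial language described above can be made concrete with a small generator. The sketch below is not the authors' data-generation code: the closed-class inventories, the gensym naming scheme, and the block size shown here are illustrative assumptions, and the taxonomic (sub-/super-set) structure within a block and the natural logic labeling of pairs are omitted.

```python
import random

# Illustrative closed-class inventories (assumed, not the paper's exact lexicon).
QUANTIFIERS   = ["all", "some", "no"]
PREMODIFIERS  = ["", "brown ", "red "]               # position 2, optional
POSTMODIFIERS = ["", " that bark", " near a tree"]   # position 4, optional
NEGATION      = ["", " don't"]                       # position 5, optional

def gensym(prefix, i):
    # Abstract open-class symbol, standing in for a novel noun or verb.
    return f"{prefix}{i:03d}"

def make_block(block_id, n_nouns=6, n_verbs=6):
    # Each block has six nouns and six verbs; blocks never mix in one pair.
    nouns = [gensym(f"noun{block_id}_", i) for i in range(n_nouns)]
    verbs = [gensym(f"verb{block_id}_", i) for i in range(n_verbs)]
    return nouns, verbs

def sample_sentence(nouns, verbs, rng):
    # Six-position template: quantifier, premodifier, noun,
    # postmodifier, negation, verb (positions 2, 4, and 5 optional).
    return (f"{rng.choice(QUANTIFIERS)} {rng.choice(PREMODIFIERS)}"
            f"{rng.choice(nouns)}{rng.choice(POSTMODIFIERS)}"
            f"{rng.choice(NEGATION)} {rng.choice(verbs)}")

rng = random.Random(0)
nouns, verbs = make_block(block_id=0)
print(sample_sentence(nouns, verbs, rng))
print(sample_sentence(nouns, verbs, rng))
```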
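The attention equations of the self-attentive encoder (SATT) read more easily in code. The following is a minimal single-view PyTorch sketch of u = sum_i alpha_i h_i, with alpha_i proportional to exp(h_hat_i^T u_w) and h_hat_i = tanh(W h_i + b_w); the dimensions are placeholders and the multi-view variant mentioned above is not shown, so this should not be taken as the authors' implementation.

```python
import torch
import torch.nn as nn

class SelfAttentiveEncoder(nn.Module):
    """BiLSTM sentence encoder with one attention view over hidden states."""
    def __init__(self, vocab_size, emb_dim=50, hidden_dim=50, attn_dim=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, attn_dim)   # W, b_w
        self.query = nn.Parameter(torch.randn(attn_dim))  # context query u_w

    def forward(self, token_ids):                    # (batch, time)
        h, _ = self.lstm(self.emb(token_ids))        # (batch, time, 2*hidden)
        keys = torch.tanh(self.proj(h))              # h_hat_i = tanh(W h_i + b_w)
        scores = keys @ self.query                   # h_hat_i . u_w
        alpha = torch.softmax(scores, dim=1)         # attention weights alpha_i
        return (alpha.unsqueeze(-1) * h).sum(dim=1)  # u = sum_i alpha_i h_i

enc = SelfAttentiveEncoder(vocab_size=1000)
sentences = torch.randint(0, 1000, (2, 7))           # two sentences of 7 tokens
print(enc(sentences).shape)                          # torch.Size([2, 100])
```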
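The consistency probe reduces to a simple check: swapping premise and hypothesis exchanges forward and reverse entailment and leaves the other five natural logic relations unchanged. A minimal sketch follows, assuming only a predict(premise, hypothesis) function that stands in for any of the trained classifiers.

```python
# Only the two entailment relations swap when premise and hypothesis are
# reversed; equivalence, negation, alternation, cover, and independence
# are symmetric.
SWAP_ON_REVERSAL = {"forward_entailment": "reverse_entailment",
                    "reverse_entailment": "forward_entailment"}

def reverse_relation(rel):
    return SWAP_ON_REVERSAL.get(rel, rel)

def consistency_probe(predict, labeled_pairs):
    """Fraction of correctly classified pairs whose reversed pair is also
    classified correctly (with the appropriately swapped gold relation)."""
    consistent, total = 0, 0
    for premise, hypothesis, gold in labeled_pairs:
        if predict(premise, hypothesis) != gold:
            continue             # the probe starts only from correct items
        total += 1
        if predict(hypothesis, premise) == reverse_relation(gold):
            consistent += 1
    return consistent / max(total, 1)

# Toy usage with a stub classifier that happens to be fully systematic.
gold_table = {("all blickets wug", "all red blickets wug"): "forward_entailment",
              ("all red blickets wug", "all blickets wug"): "reverse_entailment"}
predict = lambda p, h: gold_table[(p, h)]
pairs = [("all blickets wug", "all red blickets wug", "forward_entailment")]
print(consistency_probe(predict, pairs))   # 1.0 for a systematic learner
```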
[ "abstain", "abstain", "method", "method", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "method", "abstain", "other", "objective", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other" ]
[ "We propose a generative model for a sentence that uses two latent variables, with one intended to represent the syntax of the sentence and the other to represent its semantics.", "We show we can achieve better disentanglement between semantic and syntactic representations by training with multiple losses, including losses that exploit aligned paraphrastic sentences and word-order information.", "We also investigate the effect of moving from bag-of-words to recurrent neural network modules.", "We evaluate our models as well as several popular pretrained embeddings on standard semantic similarity tasks and novel syntactic similarity tasks.", "Empirically, we find that the model with the best performing syntactic and semantic representations also gives rise to the most disentangled representations.", "1 1 Introduction As generative latent variable models, especially of the continuous variety (Kingma and Welling, 2014; Goodfellow et al., 2014), have become increasingly important in natural language processing (Bowman et al., 2016; Gulrajani et al., 2017), there has been increased interest in learning models where the latent representations are disentangled (Hu et al., 2017).", "Much of the recent NLP work on learning disentangled representations of text has focused on disentangling the representation of attributes such as sentiment from the representation of content, typically in an effort to better control text generation (Shen et al., 2017; Zhao et al., 2017; Fu et al., 2018).", "In this work, we instead focus on learning sentence representations that disentangle the syntax and the semantics of a sentence.", "We are moreover interested in disentangling these representa-1 Code and data are available at github.com/ mingdachen/disentangle-semantics-syntax tions not for the purpose of controlling generation, but for the purpose of calculating semantic or syntactic similarity between sentences (but not both).", "To this end, we propose a generative model of a sentence which makes use of both semantic and syntactic latent variables, and we evaluate the induced representations on both standard semantic similarity tasks and on several novel syntactic similarity tasks.", "We use a deep generative model consisting of von Mises Fisher (vMF) and Gaussian priors on the semantic and syntactic latent variables (respec-tively) and a deep bag-of-words decoder that conditions on these latent variables.", "Following much recent work, we learn this model by optimizing the ELBO with a VAE-like (Kingma and Welling, 2014; Rezende et al., 2014) approach.", "Our learned semantic representations are evaluated on the SemEval semantic textual similarity (STS) tasks (Agirre et al., 2012; Cer et al., 2017).", "Because there has been less work on evaluating syntactic representations of sentences, we propose several new syntactic evaluation tasks, which involve predicting the syntactic analysis of an unseen sentence to be the syntactic analysis of its nearest neighbor (as determined by the latent syntactic representation) in a large set of annotated sentences.", "In order to improve the quality and disentanglement of the learned representations, we incorporate simple additional losses in our training, which are designed to force the latent representations to capture different information.", "In particular, our semantic multi-task losses make use of aligned paraphrase data, whereas our syntactic multi-task loss makes use of word-order information.", "Additionally, we explore different encoder and decoder architectures 
for learning better syntactic representations.", "Experimentally, we find that by training in this way we are able to force the learned representations to capture different information (as measured by the performance gap between the latent representations on each task).", "Moreover, we find that we achieve the best performance on all tasks when the learned representations are most disentangled.", "There is a growing amount of work on learning interpretable or disentangled latent representations both in machine learning (Tenenbaum and Freeman, 2000; Reed et al., 2014; Makhzani et al., 2015; Mathieu et al., 2016; Higgins et al., 2016; Chen et al., 2016; Hsu et al., 2017) and in various NLP applications, including sentence sentiment and style transfer (Hu et al., 2017; Shen et al., 2017; Fu et al., 2018; Zhao et al., 2018, inter alia ), morphological reinflection (Zhou and Neubig, 2017), semantic parsing (Yin et al., 2018), text generation (Wiseman et al., 2018), and sequence labeling (Chen et al., 2018).", "Another related thread of work is text-based variational au-toencoders (Miao et al., 2016; Bowman et al., 2016; Serban et al., 2017; Xu and Durrett, 2018).", "In terms of syntax and semantics in particular, there is a rich history of work in analyzing their interplay in sentences (Jurafsky, 1988; van Valin, Jr., 2005).", "We do not intend to claim that the two can be entirely disentangled in distinct representations.", "Rather, our goal is to propose modica of knowledge via particular multi-task losses and measure the extent to which this knowledge leads learned representations to favor syntactic or semantic information from a sentence.", "There has been prior work with similar goals for representations of words (Mitchell and Steedman, 2015) and bilexical dependencies (Mitchell, 2016), finding that decomposing syntactic and semantic information can lead to improved performance on semantic tasks.", "We find similar trends in our results, but at the level of sentence representations.", "A similar idea has been explored for text generation (Iyyer et al., 2018), where adversarial examples are generated by controlling syntax.", "Some of our losses use sentential paraphrases, relating them to work in paraphrase modeling (Wi-eting et al., 2016; Wieting and Gimpel, 2018).", "Deudon (2018) recently proposed a variational framework for modeling paraphrastic sentences, but our focus here is on learning disentangled representations.", "As part of our evaluation, we develop novel syntactic similarity tasks for sentence representations learned without any syntactic supervision.", "These evaluations relate to the broad range of work in unsupervised parsing (Klein and Manning, 2004) and part-of-speech tagging (Christodoulopoulos et al., 2010).", "However, our evaluations differ from previous evaluations in that we employ k -nearest-neighbor syntactic analyzers using our syntactic representations to choose nearest neighbors.", "There is a great deal of work on applying multitask learning to various NLP tasks (Plank et al., 2016; Rei, 2017; Augenstein and Sgaard, 2017; Bollmann et al., 2018, inter alia ) and, recently, as a way of improving the quality or disentanglement of learned representations (Zhao et al., 2017; Goyal et al., 2017; Du et al., 2018; John et al., 2018).", "Our goal is to extract the disentangled semantic and syntactic information from sentence representations.", "To achieve this, we introduce the vMF-Gaussian Variational Autoencoder (VGVAE).", "As shown in Figure 1, VGVAE assumes a 
sentence is generated by conditioning on two independent variables: semantic variable y and syntactic variable z.", "In particular, our model gives rise to the following joint likelihood $p(x, y, z) = p(y)\, p(z)\, p(x \mid y, z) = p(y)\, p(z) \prod_{t=1}^{T} p(x_t \mid y, z)$, where $x_t$ is the $t$-th word of $x$, $T$ is the sentence length, and $p(x_t \mid y, z)$ is given by a softmax over a vocabulary of size $V$.", "Further details on the parameterization are given below.", "To perform inference, we assume a factored posterior $q(y, z \mid x) = q(y \mid x)\, q(z \mid x)$, as has been used in prior work (Zhou and Neubig, 2017; Chen et al., 2018).", "Learning of VGVAE maximizes a lower bound on marginal log-likelihood: $\log p(x) \geq \mathbb{E}_{y \sim q(y \mid x),\, z \sim q(z \mid x)}\big[\log p(x \mid y, z) - \log \tfrac{q(z \mid x)}{p(z)} - \log \tfrac{q(y \mid x)}{p(y)}\big] = \mathbb{E}_{y \sim q(y \mid x),\, z \sim q(z \mid x)}[\log p(x \mid y, z)] - \mathrm{KL}(q(z \mid x) \,\|\, p(z)) - \mathrm{KL}(q(y \mid x) \,\|\, p(y)) \overset{\text{def}}{=} \mathrm{ELBO}$ (1) 3.1 Parameterizations VGVAE uses two distribution families in defining the posterior over latent variables, namely, the von Mises-Fisher (vMF) distribution and the Gaussian distribution.", "vMF Distribution.", "vMF can be regarded as a Gaussian distribution on a hypersphere with two parameters: $\mu$ and $\kappa$.", "$\mu \in \mathbb{R}^{m}$ is a normalized vector (i.e., $\|\mu\|_2 = 1$) defining the mean direction.", "$\kappa \in \mathbb{R}_{\geq 0}$ is often referred to as a concentration parameter analogous to the variance in a Gaussian distribution.", "vMF has been used for modeling similarity between two sentences (Guu et al., 2018), which is particularly suited to our purpose here, since we will evaluate our semantic representations in the context of modeling paraphrases (see Sections 4.1 and 4.2 for more details).", "Therefore, we assume $q(y \mid x)$ follows $\mathrm{vMF}(\mu(x), \kappa(x))$ and the prior $p(y)$ follows the uniform distribution $\mathrm{vMF}(\cdot, 0)$.", "With this choice of prior and posterior distribution, the $\mathrm{KL}(q(y \mid x) \,\|\, p(y))$ appearing in the ELBO can be computed in closed form: $\kappa \tfrac{I_{m/2}(\kappa)}{I_{m/2-1}(\kappa)} + (m/2 - 1) \log \kappa$", "$- (m/2) \log(2\pi) - \log I_{m/2-1}(\kappa) + \tfrac{m}{2} \log \pi + \log 2 - \log \Gamma(\tfrac{m}{2})$, (2) where $I_v$ is the modified Bessel function of the first kind at order $v$ and $\Gamma(\cdot)$ is the Gamma function.", "We follow Davidson et al. (2018) and use an acceptance-rejection scheme to sample from vMF.", "Gaussian Distribution. We assume $q(z \mid x)$ follows $\mathcal{N}(\mu(x), \mathrm{diag}(\sigma(x)))$ and that the prior $p(z)$ is $\mathcal{N}(0, I_d)$, where $I_d$ is a $d \times d$ identity matrix.", "Since we only consider a diagonal covariance matrix, the KL divergence term $\mathrm{KL}(q(z \mid x) \,\|\, p(z))$ can also be computed efficiently: $\tfrac{1}{2}\big( -\sum_i \log \sigma_i + \sum_i \sigma_i + \sum_i \mu_i^2 - d \big)$ (3) Inference and Generative Models.", "The inference models $q(y \mid x)$ and $q(z \mid x)$ are two independent word averaging encoders with additional linear feedforward neural networks for producing $\mu(x)$ and $\kappa(x)$ (or $\sigma(x)$).", "The generative model $p(x \mid y, z)$ is a feedforward neural network $g$ with the output being a bag of words.", "In particular, the expected output log-probability (the first term in Eq.", "1) is computed as follows: $\mathbb{E}_{y \sim q(y \mid x),\, z \sim q(z \mid x)}[\log p(x \mid y, z)] = \mathbb{E}_{y \sim q(y \mid x),\, z \sim q(z \mid x)}\big[ \sum_{t=1}^{T} \log \tfrac{\exp(g([y; z]))_{x_t}}{\sum_{j=1}^{V} \exp(g([y; z]))_j} \big]$, where $V$ is the vocabulary size, $[;]$ indicates concatenation, $T$ is the sentence length, and $x_t$ is the index of the $t$-th word's word type. Recurrent Neural Networks. To facilitate better learning of syntax, we also consider replacing both the generative and inference models with RNN-based sequence models, rather than bag-of-words models. 
In this setting, the generative model p ( x | y, z ) is a unidirectional long-short term memory network (LSTM; Hochreiter and Schmidhuber, 1997) and a linear feedforward neural network for predicting the word tokens (shown in Figure 2). The expected output log-probability is computed as follows: E y q ( y | x ) z q ( z | x ) [log p ( x | y, z )] = E y q ( y | x ) z q ( z | x ) \" TX t =1 log p ( x t | y, z, x 1: t 1 ) # Where V is the vocabulary size, T is the sentence length and x t is the index of the t 'th word's word type.", "The inference model q ( y | x ) is still a word averaging encoder, but q ( z | x ) is parameterized by encoder encoder Figure 2: Diagram showing LSTM decoder that uses the semantic variable y and the syntactic variable z .", "a bidirectional LSTM, where we concatenate the forward and backward hidden states and then take the average.", "The output of the LSTM is then used as input to a feedforward network with one hidden layer for producing ( x ) and ( x ) (or ( x ) ).", "In the following sections, we will introduce several losses that will be added into the training of our base model, which empirically shows the ability of further disentangling the functionality between the semantic variable y and the syntactic variable z .", "We attempt to improve the quality and disentanglement of our semantic and syntactic representations by introducing additional losses, which encourage y to capture semantic information and z to capture syntactic information.", "We elaborate on these losses below.", "Our first loss is a paraphrase reconstruction loss (PRL).", "The key assumption underlying the PRL is that for a paraphrase pair x 1 , x 2 , the semantic information is equivalent between the two sentences and only the syntactic information varies.", "To impose such constraints, PRL is defined as E y 2 q ( y | x 2 ) z 1 q ( z | x 1 ) [ log p ( x 1 | y 2 , z 1 )]+ E y 1 q ( y | x 1 ) z 2 q ( z | x 2 ) [ log p ( x 2 | y 1 , z 2 )] (4) That is, we swap the semantic variables, keep the syntactic variables, and attempt to reconstruct the sentences (shown in Figure 3).", "While instead of using a multi-task objective we could directly model paraphrases x 1 and x 2 as being generated by the same y (which naturally suggests a product-of-experts style posterior, as in Wu and Goodman (2018)), we found that for the purposes of our downstream tasks training with the multi-task loss gave superior results.", "Our second loss is a discriminative paraphrase loss (DPL).", "The DPL explicitly encourages the similarity of paraphrases x 1 , x 2 to be scored higher than the dissimilar sentences n 1 , n 2 (i.e., negative samples; see Sec. 
5 for more details) by a given margin .", "As shown in Figure 3, the similarity function in this loss only uses the semantic variables in the sentences.", "The loss is defined as max(0 , d ( x 1 , x 2 ) + d ( x 1 , n 1 ))+ max(0 , d ( x 1 , x 2 ) + d ( x 2 , n 2 )) (5) The similarity function we choose is the cosine similarity between the mean directions of the semantic variables from the two sentences: d ( x 1 , x 2 ) = cosine( ( x 1 ) , ( x 2 )) (6) 4.3 Word Position Loss It has been observed in previous work that word order typically contributes little to the modelling of semantic similarity (Wieting et al., 2016).", "We interpret this as evidence that word position information is more relevant to syntax than semantics, at least as evaluated by STS tasks.", "To guide the syntactic variable to represent word order, we introduce a word position loss (WPL).", "Although our word averaging encoders only have access to the bag of words of the input, using this loss can be viewed as a denoising autoencoder where we have maximal input noise (i.e., an orderless representation of the input) and the encoders need to learn to reconstruct the ordering.", "For both word averaging encoders and LSTM encoders, WPL is parameterized by a three-layer feedforward neural network f ( ) with input from the concatenation of the samples of the syntactic variable z and the embedding vector e i at input position i ; we then attempt to predict a one-hot vector representing the position i .", "More specifically, we define WPL def == E z q ( z | x ) \" X i log softmax ( f ([ e i ; z ])) i # where softmax ( ) i indicates the probability at position i .", "KL Weight.", "Following previous work on VAEs (Higgins et al., 2016; Alemi et al., 2016), we attach a weight to the KL divergence and tune it based on development set performance.", "Negative Samples.", "When applying DPL, we select negative samples based on maximizing cosine similarity to sentences from a subset of the data.", "In particular, we accumulate k mini-batches during training, yielding a mega-batch S (Wieting and Gimpel, 2018).", "Then the negative samples are selected based on the following criterion: n 1 = argmax n S n 6 = x 2 cosine( ( x 1 ) , ( n )) where x 1 , x 2 forms the paraphrase pair and the mega-batch size is fixed to k = 20 for all of our experiments.", "Since all of our models are trained from scratch, we observed some instabilities with DPL during the initial stages of training.", "We suspect that this is because the negative samples at these initial stages are of low quality.", "To overcome this issue, DPL is included starting at the second epoch of training so that the models can have a warm start.", "We subsampled half a million paraphrase pairs from ParaNMT-50M (Wieting and Gimpel, 2018)", "as our training set.", "We use SemEval semantic textual similarity (STS) task 2017 (Cer et al., 2017) as a development set.", "For semantic similarity evaluation, we use the STS tasks from 2012 to 2016 (Agirre et al., 2012, 2013, 2014, 2015, 2016) and the STS benchmark test set (Cer et al., 2017).", "For evaluating syntactic similarity, we propose several evaluations.", "One uses the gold parse trees from the Penn Treebank (Marcus et al., 1993), and the others are based on automatically tagging and parsing five million paraphrases from ParaNMT-50M; we describe these tasks in detail below.", "For hyperparameters, the dimensions of the latent variables are 50.", "The dimensions of word embeddings are 50.", "We use cosine similarity as similarity metric 
for all of our experiments.", "We tune the weights for PRL and reconstruction loss from 0.1 to 1 in increments of 0.1 based on the development set performance.", "We use one sample from each latent variable during training.", "When evaluating VGVAE based models on STS tasks, we use the mean direction of the semantic variable y , while for syntactic similarity tasks, we use the mean vector of the syntactic variable z .", "Our baselines are a simple word averaging (WORDAVG ) model and bidirectional LSTM averaging (BLSTMAVG ) model, both of which have been shown to be very competitive for modeling semantic similarity when trained on paraphrases (Wieting and Gimpel, 2018).", "Specifically, WORDAVG takes the average over the word embeddings in the input sequence to obtain the sentence representation.", "BLSTMAVG uses the averaged hidden states of a bidirectional LSTM as the sentence representation, where forward and backward hidden states are concatenated.", "These models use 50 dimensional word embeddings and 50 dimensional LSTM hidden vectors per direction.", "These baselines are trained with DPL only.", "Additionally, we scramble the input sentence for BLSTMAVG since it has been reported benefi-cial for its performance in semantic similarity tasks (Wieting and Gimpel, 2017).", "We also benchmark several pretrained embeddings on both semantic similarity and syntactic similarity datasets, including GloVe (Pennington et al., 2014), 3 SkipThought (Kiros et al., 2015), 4 3 We use 300 dimensional Common Crawl embeddings available at nlp.stanford.edu/projects/glove 4 github.com/ryankiros/skip-thoughts semantic var.", "InferSent (Conneau et al., 2017), 5 ELMo (Peters et al., 2018), 6 and BERT (Devlin et al., 2018).", "7 For GloVe, we average word embeddings to form sentence embeddings.", "For ELMo, we average the hidden states from three layers and then average the hidden states across time steps.", "For BERT, we use the averaged hidden states from the last attention block.", "As shown in Table 1, the semantic and syntactic variables of our base VGVAE model show similar performance on the STS test sets.", "As we begin adding multi-task losses, however, the performance of these two variables gradually diverges, indicating that different information is being captured in the two variables.", "More interestingly, note that when any of the three losses is added to the base VGVAE model (even the WPL loss which makes no use of paraphrases), the performance of the semantic variable increases and the performance of the syntactic variable decreases; 5 We use model V1 available at github.com/ facebookresearch/InferSent 6 We use the original model available at allennlp.", "org/elmo 7 We use bert-large-uncased available at github.com/ huggingface/pytorch-pretrained-BERT this suggests that each loss is useful in encouraging the latent variables to learn complementary information.", "Indeed, the trend of additional losses both increasing semantic performance and decreasing syntactic performance holds even as we use more than two losses, except for the single case of VGVAE + PRL + DPL, where the syntactic performance increases slightly.", "Finally, we see that when the bag-of-words VGVAE model is used with all of the multi-task losses (ALL), we observe a large gap between the performance of the semantic and syntactic latent variables, as well as strong performance on the STS tasks that outperforms all baselines.", "Using LSTM modules further strengthens the disentanglement between the two variables and leads to 
even better semantic performance.", "While using an LSTM encoder and a bag-of-words decoder is difficult to justify from a generative modeling perspective, we include results with this con-figuration to separate out the contributions of the LSTM encoder and decoder.", "So far, we have only confirmed empirically that the syntactic variable has learned to not capture semantic information.", "To investigate what the syntactic variable has captured, we propose several syntactic similarity tasks.", "In particular, we consider using the syntactic latent variable in calculating nearest neighbors for a 1-nearest-neighbor syntactic parser or part-of-speech tagger.", "We use our latent variables to define the similarity function in these settings and evaluate the quality of the output parses and tag sequences using several metrics.", "Our first evaluation involves constituency parsing, and we use the standard training and test splits from the Penn Treebank.", "We predict a parse tree for each sentence in the test set by finding its nearest neighbor in the training set based on the cosine similarity of the mean vectors for the syntactic variables.", "The parse tree of the nearest neighbor will then be treated as our prediction for the test sentence.", "Since the train and test sentences may differ in length, standard parse evaluation metrics are not applicable, so we use tree edit distance ( Zhang and Shasha, 1989) 8 to compute the distance between two parse tree without consider-8 github.com/timtadh/zhang-shasha Constituent Parsing (TED, ) Constituent Parsing ( F 1 , ) POS Tagging (%Acc., ) GloVe 120.8 27.3 23.9 SkipThought 99.5 30.9 29.6 InferSent 138.9 28.0 25.1 ELMo 103.8 30.4 27.8 BERT 101.7 28.6 25.4 Random baseline 121.4 19.2 12.9 Upper bound performance 51.6 71.1 62.3 WORDAVG 107.0 25.5 21.4 BLSTMAVG 106.8 25.7 21.6 semantic var.", "To better understand the difficulty of this task, we introduce two baselines.", "The first randomly selects a training sentence.", "We calculate its performance by running it ten times and then reporting the average.", "We also report the upper bound performance given the training set.", "Since computing tree edit distance is time consuming, we subsample 100 test instances and compute the minimum tree edit distance for each sampled instance.", "Thus, this number can be seen as the approximated upper bound performance for this task given the training set.", "To use a more standard metric for these syntactic similarity tasks, we must be able to retrieve training examples with the same number of words as the sentence we are trying to parse.", "We accordingly parse and tag the five million paraphrase subset of the ParaNMT training data using Stanford CoreNLP (Manning et al., 2014).", "To form a test set, we group sentences in terms of sentence length and subsample 300 sentences for each sentence length.", "After removing the paraphrases of the sentences in the test set, we use the rest of the training set as candidate sentences for nearest neighbor search, and we restrict nearest neighbors to have the same sentence length as the sentence we are attempting to parse or tag, which allows us to use standard metrics like labeled F 1 score and tagging accuracy for evaluation.", "As shown in Table 2, the syntactic variables and semantic variables demonstrate similar trends across these three syntactic tasks.", "Interestingly, both DPL and PRL help to improve the performance of the syntactic variables, even though these two losses are only imposed on the semantic variables.", 
"We saw an analogous pattern in Table 1, which again suggests that by pushing the semantic variables to learn information shared by paraphrastic sentences, we also encourage the syntactic variables to capture complementary syntactic information.", "We also find that adding WPL brings the largest improvement to the syntactic variable, and keeps the syntactic information carried by the semantic variables at a relatively low level.", "Finally, when adding all three losses, the syntactic variable shows the strongest performance across the three tasks.", "In addition, we observe that the use of the LSTM encoder improves syntactic performance by a large margin and the LSTM decoder improves further, which suggests that the use of the LSTM decoder contributes to the amount of syntactic information represented in the syntactic variable.", "Among pretrained representations, SkipThought shows the strongest performance overall and ELMo has the second best performance in the last two columns.", "While InferSent performs worst in the first column, it gives reasonable performance for the other two.", "BERT performs 5 10 15 20 25 20 40 60 80 100 Constituent Parsing BestRandomALLALL + LSTM enc.", "To investigate the performance gap between the bag-of-words VGVAE and VGVAE with LSTM modules, in Figure 4 we plot the performance of our models and baselines as the length of the target sentence increases.", "We see that performance in all settings degrades as the sentences get longer.", "This may be due to the fact that the data is much sparser as sentence length increases (leaving fewer candidate nearest neighbors for prediction).", "We also see that above 4 words or so the performance gap between the bag-of-words VGVAE and VGVAE with LSTM modules becomes more and more obvious.", "This may be because the bag-of-words encoder has a harder time capturing syntactic information as sentence length increases.", "In addition, there is a slight improvement from using an LSTM decoder when the sentence length increases beyond 12 or so, which suggests that a bag-of-words decoder may struggle to capture certain parts of the syntactic information in the sentence, even when using an LSTM encoder.", "To qualitatively evaluate our latent variables, we find (via cosine similarity) nearest neighbor sentences to test set examples in terms of both the semantic and syntactic representations.", "We also find nearest neighbors of words (which we view as single-word sentences).", "We discuss the results of this analysis below.", "that the most similar words found by the syntactic variable share the same part-of-speech tags with the query words.", "For example, starting is close to getting and taking, even though these words are not semantically similar.", "Words retrieved according to the semantic variable, however, are more similar semantically, e.g., begin and starts.", "As another example, times is similar to words that are either related to descriptions of frequency (e.g., twice and often) or related to numbers (e.g., thousand, seven).", "As shown in Table 4, sentences that are similar in terms of their semantic variables tend to have similar semantics.", "However, sentences that are similar in terms of their syntactic variables are mostly semantically unrelated but have similar surface forms.", "For example, you 're gon na save her life . has the same meaning as you will save her . while having a similar syntactic structure to you 're gon na give a speech . 
(despite having very different meanings).", "As another example, although the semantic variable does not find a good match for i have much more colours at home ., which can be attributed to the limited size of candidate sentences, the nearest syntactic neighbor (you have a beautiful view from here .) has a very similar syntactic structure to the query sentence.", "In this paper we explored simple methods to disentangle syntax and semantics in latent representations of sentences.", "One goal was to measure the impact of simple decisions on the disentanglement of both the semantic and syntactic variables, even when restricting ourselves to simplified bag-of-words encoders.", "Due to the constrained nature of these bag-of-words models, we found that it was important to use different word embedding spaces for the semantic and syntactic encoders.", "In preliminary experiments, we experimented with the use of the same word embedding space but distinct feed-forward layers in the two latent variable encoders.", "However, this setting proved extremely difficult to achieve a disentanglement between syntax and semantics.", "Hence an important component of disentanglement with these bag-of-words encoders is the use of different word embedding spaces.", "starting syntactic: getting heading sitting chasing taking require trying sharing bothering pushing paying semantic: begin start stopping forward rising wake initial starts goes started again getting beginning area syntactic: engines certificate guests bottle responsibility lesson pieces suit bags vessel applications semantic: sector location zone fields rooms field places yard warehouse seats coordinates territory considered syntactic: stable limited odd scary classified concerned awful purple impressive embarrassing jealous semantic: thought assumed regard reasons wished understood purposes seemed expect guessed meant jokes syntactic: gentlemen photos finding baby missile dna parent shop murder science recognition sheriff semantic: funny humor prize stars cookie paradise dessert worthy smile happiness thrilled ideal kidding times syntactic: princess officer wounds plan gang ships feelings user liar elements coincidence degrees pattern semantic: twice later thousand pages seven every once often decade forgotten series four eight day time Table 3: Examples of the most similar words to particular query words using syntactic variable (first row) or semantic variable (second row).", "We also conducted experiments using LSTM encoders and decoders as recurrent neural networks are a natural way to capture syntactic information in a sentence.", "We found this approach to give us additional benefits for both disentangling semantics and syntax and achieving better results overall.", "Nonetheless, we find it encouraging that even when using bag-of-words encoders, our multi-task losses are able to achieve a separation as measured by our semantic and syntactic similarity tasks.", "We proposed a generative model and several losses for disentangling syntax and semantics in sentence representations.", "We also proposed syntactic similarity tasks for measuring the amount of disentanglement between semantic and syntactic representations.", "We characterized the effects of the losses as well as the use of LSTM modules on both semantic tasks and syntactic tasks.", "Our models achieve the best performance across both sets of similarity tasks when the latent representations are most disentangled.", "We would like to thank the anonymous reviewers, NVIDIA for donating GPUs 
used in this research, and Google for a faculty research award to K. Gimpel that partially supported this research." ]
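The qualitative analysis in the sentences above retrieves nearest-neighbour sentences and words by cosine similarity between the semantic or syntactic latent representations. Below is a minimal sketch of that lookup, assuming the representations are already available as NumPy arrays; the random vectors, dimensions, and function names are illustrative stand-ins, not the authors' code.

```python
import numpy as np

def nearest_neighbors(query_vec, candidate_vecs, k=3):
    """Indices of the k candidates with the highest cosine similarity to the query."""
    q = query_vec / (np.linalg.norm(query_vec) + 1e-8)
    c = candidate_vecs / (np.linalg.norm(candidate_vecs, axis=1, keepdims=True) + 1e-8)
    sims = c @ q                        # cosine similarity against every candidate
    return np.argsort(-sims)[:k]        # top-k most similar candidates

# Toy usage with hypothetical 50-dimensional semantic and syntactic representations.
rng = np.random.default_rng(0)
semantic_query, syntactic_query = rng.normal(size=50), rng.normal(size=50)
semantic_cands = rng.normal(size=(1000, 50))
syntactic_cands = rng.normal(size=(1000, 50))
print(nearest_neighbors(semantic_query, semantic_cands))    # neighbours by meaning
print(nearest_neighbors(syntactic_query, syntactic_cands))  # neighbours by surface/syntactic form
```

The same routine, applied to single-word "sentences", yields the word neighbours of the kind reported in Table 3.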
[ "objective", "result", "objective", "objective", "result", "abstain", "abstain", "method", "objective", "objective", "method", "method", "method", "objective", "result", "abstain", "objective", "result", "result", "other", "other", "other", "abstain", "objective", "other", "abstain", "other", "method", "abstain", "objective", "other", "objective", "other", "method", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "other", "method", "result", "result", "objective", "objective", "abstain", "result", "other" ]
[ "Aspect-based sentiment analysis (ABSA), which aims to identify fine-grained opinion polarity towards a specific aspect, is a challenging subtask of sentiment analysis (SA).", "In this paper, we construct an auxiliary sentence from the aspect and convert ABSA to a sentence-pair classification task, such as question answering (QA) and natural language inference (NLI).", "We fine-tune the pre-trained model from BERT and achieve new state-of-the-art results on SentiHood and SemEval-2014 Task 4 datasets.", "The source codes are available at https://github.com/ HSLCY/ABSA-BERT-pair .", "Sentiment analysis (SA) is an important task in natural language processing.", "It solves the computational processing of opinions, emotions, and subjectivity sentiment is collected, analyzed and summarized.", "It has received much attention not only in academia but also in industry, providing real-time feedback through online reviews on websites such as Amazon, which can take advantage of customers' opinions on specific products or services.", "The underlying assumption of this task is that the entire text has an overall polarity.", "However, the users' comments may contain different aspects, such as: This book is a hardcover version, but the price is a bit high.", "The polarity in appearance' is positive, and the polarity regarding price' is negative.", "Aspect-based sentiment analysis (ABSA) (Jo and Oh, 2011; Pontiki et al., 2014, 2015, 2016) aims to identify fine-grained polarity towards a specific aspect.", "This task allows users to evaluate aggregated sentiments for each aspect of a given product or service and gain a more granular understanding of their quality.", "Both SA and ABSA are sentence-level or document-level tasks, but one comment may refer to more than one object, and sentence-level tasks cannot handle sentences with multiple targets.", "Therefore, Saeidi et al. (2016) introduce the task of targeted aspect-based sentiment analysis (TABSA), which aims to identify fine-grained opinion polarity towards a specific aspect associated with a given target.", "The task can be divided into two steps: (1) the first step is to determine the aspects associated with each target; (2) the second step is to resolve the polarity of aspects to a given target.", "The earliest work on (T)ABSA relied heavily on feature engineering (Wagner et al., 2014; Kiritchenko et al., 2014), and subsequent neural network-based methods (Nguyen and Shirai, 2015; Wang et al., 2016; Tang et al., 2015, 2016; Wang et al., 2017) achieved higher accuracy.", "Recently, Ma et al. (2018) incorporate useful commonsense knowledge into a deep neural network to further enhance the result of the model.", "Liu et al. 
(2018) optimize the memory network and apply it to their model to better capture linguistic structure.", "More recently, the pre-trained language models, such as ELMo (Peters et al., 2018), OpenAI GPT (Radford et al., 2018), and BERT (Devlin et al., 2018), have shown their effectiveness to alleviate the effort of feature engineering.", "Especially, BERT has achieved excellent results in QA and NLI.", "However, there is not much improvement in (T)ABSA task with the direct use of the pre-trained BERT model (see Table 3).", "We think this is due to the inappropriate use of the pre-trained BERT model.", "Since the input representation of BERT can represent both a single text sentence and a pair of text sentences, we can convert (T)ABSA into a sentence-pair classification task and fine-tune the pre-trained BERT.", "In this paper, we investigate several methods of constructing an auxiliary sentence and transform (T)ABSA into a sentence-pair classification task.", "We fine-tune the pre-trained model from BERT and achieve new state-of-the-art results on (T)ABSA task.", "We also conduct a comparative experiment to verify that the classification based on a sentence-pair is better than the single-sentence classification with fine-tuned BERT, which means that the improvement is not only from BERT but also from our method.", "In particular, our contribution is two-fold:", "1. We propose a new solution of (T)ABSA by converting it to a sentence-pair classification task.", "2. We fine-tune the pre-trained BERT model and achieve new state-of-the-art results on SentiHood and SemEval-2014 Task 4 datasets.", "TABSA In TABSA, a sentence s usually consists of a series of words: { w 1 , , w m } , and some of the words { w i 1 , , w i k } are pre-identified targets { t 1 , , t k } , following Saeidi et al. 
(2016), we set the task as a 3-class classification problem: given the sentence s , a set of target entities T and a fixed aspect set A = { general, price, transit location, safety } , predict the sentiment polarity y { positive, negative, none } over the full set of the target-aspect pairs { ( t, a ) : t T, a A } .", "As we can see in Table 1, the gold standard polarity of (LOCATION2, price) is negative, while the polarity of (LOCATION1, price) is none.", "ABSA In ABSA, the target-aspect pairs { ( t, a ) } become only aspects a .", "This setting is equivalent to learning subtask 3 (Aspect Category Detection) and subtask 4 (Aspect Category Polarity) of SemEval-2014 Task 4 1 at the same time.", "For simplicity, we mainly describe our method with TABSA as an example.", "We consider the following four methods to convert the TABSA task into a sentence pair classification task: 1 http://alt.qcri.org/semeval2014/task4/ Example: LOCATION2 is central London so extremely expensive, LOCATION1 is often considered the coolest area of London.", "Sentences for QA-M The sentence we want to generate from the target-aspect pair is a question, and the format needs to be the same.", "For example, for the set of a target-aspect pair (LOCATION1, safety), the sentence we generate is what do you think of the safety of location 1 ?", "Sentences for NLI-M For the NLI task, the conditions we set when generating sentences are less strict, and the form is much simpler.", "The sentence created at this time is not a standard sentence, but a simple pseudo-sentence, with (LOCA-TION1, safety) pair as an example: the auxiliary sentence is: location 1 safety.", "Sentences for QA-B For QA-B, we add the label information and temporarily convert TABSA into a binary classification problem ( label { yes, no } ) to obtain the probability distribution.", "At this time, each target-aspect pair will generate three sequences such as the polarity of the aspect safety of location 1 is positive, the polarity of the aspect safety of location 1 is negative, the polarity of the aspect safety of location 1 is none.", "We use the probability value of yes as the matching score.", "For a target-aspect pair which generates three sequences ( positive, negative, none ), we take the class of the sequence with the highest matching score for the predicted category.", "Sentences for NLI-B The difference between NLI-B and QA-B is that the auxiliary sentence changes from a question to a pseudo-sentence.", "The auxiliary sentences are: location 1 safety positive, location 1 safety negative, and location 1 safety none.", "After we construct the auxiliary sentence, we can transform the TABSA task from a single sentence classification task to a sentence pair classification task.", "As shown in Table 3, this is a necessary operation that can significantly improve the experimental results of the TABSA task.", "BERTBERT (Devlin et al., 2018) is a new language representation model, which uses bidirectional transformers to pre-train a large corpus, and fine-tunes the pre-trained model on other tasks.", "We fine-tune the pre-trained BERT model on TABSA task.", "Let's take a brief look at the input representation and the fine-tuning procedure.", "The input representation of the BERT can explicitly represent a pair of text sentences in a sequence of tokens.", "For a given token, its input representation is constructed by summing the corresponding token, segment, and position embeddings.", "For classification tasks, the first word of each sequence is a unique 
classification embedding ([CLS]).", "BERT fine-tuning is straightforward.", "To obtain a fixed-dimensional pooled representation of the input sequence, we use the final hidden state (i.e., the output of the transformer) of the first token as the input.", "We denote the vector as C ∈ R^H .", "Then we add a classification layer whose parameter matrix is W ∈ R^{K×H} , where K is the number of categories.", "Finally, the probability of each category P is calculated by the softmax function P = softmax(C W^T) .", "BERT-single for (T)ABSA: BERT for single-sentence classification tasks.", "Suppose the number of target categories is n_t and the number of aspect categories is n_a .", "We consider TABSA as a combination of n_t × n_a target-aspect-related sentiment classification problems, first classifying each sentiment classification problem, and then summarizing the results obtained.", "For ABSA, we fine-tune the pre-trained BERT model to train n_a classifiers for all aspects and then summarize the results.", "BERT-pair for (T)ABSA: BERT for sentence-pair classification tasks.", "Based on the auxiliary sentence constructed in Section 2.2, we use the sentence-pair classification approach to solve (T)ABSA.", "Corresponding to the four ways of constructing sentences, we name the models: BERT-pair-QA-M, BERT-pair-NLI-M, BERT-pair-QA-B, and BERT-pair-NLI-B.", "We evaluate our method on the SentiHood (Saeidi et al., 2016) dataset [2], which consists of 5,215 sentences, 3,862 of which contain a single target, and the remainder multiple targets.", "Each sentence contains a list of target-aspect pairs { t, a } with the sentiment polarity y .", "Ultimately, given a sentence s and the target t in the sentence, we need to: (1) detect the mention of an aspect a for the target t ; (2) determine the positive or negative sentiment polarity y for detected target-aspect pairs.", "We also evaluate our method on the SemEval-2014 Task 4 (Pontiki et al., 2014) dataset [3] for aspect-based sentiment analysis.", "The only difference from SentiHood is that the target-aspect pairs { t, a } become only aspects a .", "This setting allows us to jointly evaluate subtask 3 (Aspect Category Detection) and subtask 4 (Aspect Category Polarity).", "We use the pre-trained uncased BERT-base model [4] for fine-tuning.", "The number of Transformer blocks is 12, the hidden layer size is 768, the number of self-attention heads is 12, and the total number of parameters for the pre-trained model is 110M.", "When fine-tuning, we keep the dropout probability at 0.1 and set the number of [...]. (Footnotes: [2] Dataset mirror: https://github.com/uclmr/jack/tree/master/data/sentihood ; [3] http://alt.qcri.org/semeval2014/task4/ ; [4] https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip)", "LR (Saeidi et al., 2016): a logistic regression classifier with n-gram and pos-tag features.", "LSTM-Final (Saeidi et al., 2016): a biLSTM model with the final state as a representation.", "LSTM-Loc (Saeidi et al., 2016): a biLSTM model with the state associated with the target position as a representation.", "LSTM+TA+SA (Ma et al., 2018): a biLSTM model which introduces complex target-level and sentence-level attention mechanisms.", "SenticLSTM (Ma et al., 2018): an upgraded version of the LSTM+TA+SA model which introduces external information from SenticNet (Cambria et al., 2016).", "Dmu-Entnet (Liu et al., 2018): a bidirectional EntNet (Henaff et al., 2016) with external memory chains with a delayed memory update mechanism to track entities.", "During the 
evaluation of SentiHood, following Saeidi et al. (2016), we only consider the four most frequently seen aspects (general, price, transit-location, safety).", "When evaluating the aspect detection, following Ma et al. (2018), we use strict accuracy and Macro-F1, and we also report AUC.", "In sentiment classification, we use accuracy and macro-average AUC as the evaluation indices.", "Results on SentiHood are presented in Table 3.", "The results of the BERT-single model on aspect detection are better than Dmu-Entnet, but the accuracy of sentiment classification is much lower than that of both SenticLSTM and Dmu-Entnet, with a difference of 3.8 and 5.5 respectively.", "However, BERT-pair outperforms other models on aspect detection and sentiment analysis by a substantial margin, obtaining improvements of 9.4 points in macro-average F1 and 2.6 points in accuracy over Dmu-Entnet.", "Overall, the performance of the four BERT-pair models is close.", "It is worth noting that BERT-pair-NLI models perform relatively better on aspect detection, while BERT-pair-QA models perform better on sentiment classification.", "Also, the BERT-pair-QA-B and BERT-pair-NLI-B models can achieve better AUC values on sentiment classification than the other models.", "The benchmarks for SemEval-2014 Task 4 are the two best performing systems in Pontiki et al. (2014) and ATAE-LSTM (Wang et al., 2016).", "When evaluating SemEval-2014 Task 4 subtask 3 and subtask 4, following Pontiki et al. (2014), we use Micro-F1 and accuracy respectively.", "Results on SemEval-2014 are presented in Table 4 and Table 5.", "Table 4 (test set results for SemEval-2014 Task 4 Subtask 3: Aspect Category Detection; columns: Model, P, R, F1): XRCE 83.23, 81.37, 82.29; NRC-Canada 91.04, 86.24, 88.58; BERT-single 92.78, 89.07, 90.89; BERT-pair-QA-M 92.87, 90.24, 91.54; BERT-pair-NLI-M 93.15, 90.24, 91.67; BERT-pair-QA-B 93.04, 89.95, 91.47; BERT-pair-NLI-B 93.57, 90.83, 92.18.", "We find that BERT-single has achieved better results on these two subtasks, and BERT-pair has achieved further improvements over BERT-single.", "The BERT-pair-NLI-B model achieves the best performance for aspect category detection.", "For aspect category polarity, BERT-pair-QA-B performs best on all 4-way, 3-way, and binary settings.", "Why is the experimental result of the BERT-pair model so much better?", "On the one hand, we convert the target and aspect information into an auxiliary sentence, which is equivalent to exponentially expanding the corpus.", "A sentence s_i in the original data set will be expanded into (s_i, t_1, a_1), ..., (s_i, t_1, a_{n_a}), ..., (s_i, t_{n_t}, a_{n_a}) in the sentence pair classification task.", "On the other hand, it can be seen from the amazing improvement of the BERT model on the QA and NLI tasks (Devlin et al., 2018) that the BERT model has an advantage in dealing with sentence pair classification tasks.", "This advantage comes from both the unsupervised masked language model and the next sentence prediction tasks.", "TABSA is more complicated than SA due to additional target and aspect information.", "Directly fine-tuning the pre-trained BERT on TABSA does not achieve performance growth.", "However, when we separate the target and the aspect to form an auxiliary sentence and transform the TABSA into a sentence pair classification task, the scenario is similar to QA and NLI, and then the advantage of the pre-trained BERT model can be fully utilized.", "Our approach is not limited to TABSA, and this construction method can be used for other similar tasks.", "For ABSA, we can use the same 
approach to construct the auxiliary sentence with only aspects.", "In BERT-pair models, BERT-pair-QA-B and BERT-pair-NLI-B achieve better AUC values on sentiment classification, probably because of the modeling of label information.", "In this paper, we constructed an auxiliary sentence to transform (T)ABSA from a single sentence classification task to a sentence pair classification task.", "We fine-tuned the pre-trained BERT model on the sentence pair classification task and obtained new state-of-the-art results.", "We compared the experimental results of single sentence classification and sentence pair classification based on BERT fine-tuning, analyzed the advantages of sentence pair classification, and verified the validity of our conversion method.", "In the future, we will apply this conversion method to other similar tasks.", "We would like to thank the anonymous reviewers for their valuable comments.", "The research work is supported by Shanghai Municipal Science and Technology Commission (No. 16JC1420401 and 17JC1404100), National Key Research and Development Program of China (No. 2017YFB1002104), and National Natural Science Foundation of China (No. 61672162 and 61751201)." ]
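The four auxiliary-sentence constructions described above (QA-M, NLI-M, QA-B, NLI-B) can be written down directly from the templates and examples given in the text. Below is a minimal sketch following those templates; the function and variable names are illustrative, not taken from the authors' released code.

```python
LABELS = ("positive", "negative", "none")

def qa_m(target, aspect):
    # Question-style auxiliary sentence, e.g. "what do you think of the safety of location 1 ?"
    return f"what do you think of the {aspect} of {target} ?"

def nli_m(target, aspect):
    # Simple pseudo-sentence, e.g. "location 1 safety"
    return f"{target} {aspect}"

def qa_b(target, aspect):
    # One yes/no sequence per candidate label; the label whose sequence scores highest on "yes" wins.
    return [f"the polarity of the aspect {aspect} of {target} is {label}" for label in LABELS]

def nli_b(target, aspect):
    # Pseudo-sentence variant of QA-B, e.g. "location 1 safety positive"
    return [f"{target} {aspect} {label}" for label in LABELS]

print(qa_m("location 1", "safety"))
print(nli_m("location 1", "safety"))
print(qa_b("location 1", "safety"))
print(nli_b("location 1", "safety"))
```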
[ "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "objective", "result", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "abstain", "result", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "objective", "result", "method", "other", "other" ]
[ "There is content such as hate speech, offensive, toxic or aggressive documents, which are perceived differently by their consumers.", "They are commonly identified using classifiers solely based on textual content that generalize pre-agreed meanings of difficult problems.", "Such models provide the same results for each user, which leads to high misclassification rate observable especially for contentious, aggressive documents.", "Both document controversy and user nonconformity require new solutions.", "Therefore, we propose novel personalized approaches that respect individual beliefs expressed by either user conformity-based measures or various embeddings of their previous text annotations.", "We found that only a few annotations of most controversial documents are enough for all our personalization methods to significantly outperform classic, generalized solutions.", "The more controversial the content, the greater the gain.", "The personalized solutions may be used to efficiently filter unwanted aggressive content in the way adjusted to a given person.", "Unfortunately, in the pursuit of knowledge on the Internet, one may come across content that they consider inappropriate for various reasons, such as being too aggressive.", "Many users notoriously come across content that offends them while surfing the Internet.", "This can cause discomfort and discourage from further expansion of knowledge.", "To avoid this, it is important to effectively filter out content that a given user may find unwanted.", "This poses a risk of erroneous assessment of whether a given text is considered inappropriate by a given person.", "For that purpose, we need to extend commonly applied generalizing solutions and develop personalized methods that take into account beliefs and preferences of the individual user.", "We expect this information can be obtained from the individual's prior opinions about the offensiveness of some texts.", "Then, it is crucial to select the relevant texts that allow deriving as much information about users preferences as possible.", "Our new idea is to use some known, most controversial texts whose offensiveness is very ambiguous and depends more on subjective personal judgment.", "We examined how many documents has to be annotated by a given user to encapsulate their beliefs sufficiently and to improve personalized reasoning.", "Independently, we considered personal measures quantifying conformity of each individual.", "In other words, we measured to what extent a person evaluates documents similarly to others, i.e. \"is a part of the mainstream\".", "The conformity measures are used as input features for the classifier.", "This way, it is possible to find out the user beliefs based on their opinions regarding a relatively small number of texts.", "In this paper, we present novel methods of personalized aggressive content detection based on the representation of user opinion about aggressive texts.", "We propose: (1) conformity-based personalization, (2) class-based embeddings, and (3) annotation-based embeddings (Sec. 6).", "Our experiments were performed on the only relevant dataset Wikipedia Talk Labels: Aggression (Sec. 3).", "Having defined and calculated controversy of documents and conformity of users (Sec. 4), we validated our methods.", "The results revealed that additional individualized features: simple user conformity measures computed on few texts or embeddings of even four controversial texts significantly boost our personalized classification (Sec. 
8).", "The gain provided by our personalized methods is greater for more controversial documents.", "This work is based on the results obtained in the article (Koco n et al., 2021).", "In addition, in paper (Milkowski et al., 2021), we showed that the personalized approach is also effective for other subjective problems in NLP, such as recognizing emotions elicited by text.", "The source code we used to conduct experiments and evaluation is publicly available in CLARIN-PL GitHub repository 1 .", "It is observable a steady increase in the number of offensive (Levmore and Nussbaum, 2010), hate (Breckheimer, 2001; Brown, 2018), aggressive, toxic, cyberbullying (Chen et al., 2012), or simply socially unacceptable online messages (Ljubeic et al., 2019).", "There are many definitions of offensive speech, which can be summarised as speech that targets specific social groups in a way that is harmful to them (Jacobs, 2002).", "Some countries, such as the USA, protect the rights to use this type of speech as an acceptable form of political expression (Heyman, 2008).", "In turn, the law prohibits hate speech in many EU countries (Rosenfeld, 2002).", "Such laws pose a challenge for operators of social networking sites and other online services to identify and moderate unacceptable content.", "Large companies such as Facebook and Google are often accused of not doing enough to ensure that their platforms are not used to attack other people (Ben-David and Fernndez, 2016).", "On the other hand, attempts to automatically control content often lead to the accidental blocking of content that was not intended to offend anyone.", "Ambiguity of the definition of offensiveness is a serious problem.", "This inconsistency is visible in many reviews related to automatic detection of hate speech (Fortuna and Nunes, 2018; Schmidt and Wiegand, 2017; Alrehili, 2019; Poletto et al., 2020) or more specifically on aggressiveness detection (Sadiq et al., 2021; Modha et al., 2020).", "Automatic recognition of offensive speech is the subject of many NLP workshops, such as Se-meval 2019 (Zampieri et al., 2019b), GermEval 2018 (Wiegand et al., 2018), FIRE/HASOC 2019 (Mandl et al., 2019) or PolEval 2019 (Ptaszynski et al., 2019).", "Classic methods do not consider context and word order, e.g. the bag-of-words model (Zhang et al., 2010) or TF-IDF (Sahlgren et al., 2018).", "The representation may be extended with additional ontologies (Bloehdorn and Hotho, 2004) or WordNets (Scott and Matwin, 1998; Piasecki et al., 2009; Misiaszek et al., 2014; Janz et al., 2017; Kocon et al., 2019b) and used with SVM (Razavi 1 https://github.com/CLARIN-PL/ controversy-conformity et al., 2010) or logistic regression models (Waseem and Hovy, 2016; Sahlgren et al., 2018; Kocon et al., 2018; Kocon and Maziarz, 2021).", "New methods often use word embeddings (Wiegand et al., 2018; Bojanowski et al., 2017; ukasz Augustyniak et al., 2021) (Wiegand et al., 2018; Bojanowski et al., 2017) mixed with character embeddings (Augusty-niak et al., 2019), together with deep neural networks, e.g. CNN (Zampieri et al., 2019a) or LSTM (Yenala et al., 2017).", "The current state-of-the-art are Transformer-based architectures such as BERT (Devlin et al., 2019), ALBERT (Lan et al., 2019), XLNet (Yang et al., 2019) or RoBERTa (Liu et al., 2019).", "Nevertheless all these methods focus solely on the text itself.", "Any wider context has been considered very rarely, e.g. 
as time, thread or author's social network features (Ziems et al., 2020).", "In articles focused on detection of aggressiveness (Modha et al., 2018; Risch and Krestel, 2018; Safi Samghabadi et al., 2020), the most often used were datasets shared at the Workshops on Trolling, Aggression and Cyberbullying (TRAC) (Kumar et al., 2018, 2020) at LREC.", "Few others also used the Wikipedia Talk Labels: Aggression (Wulczyn et al., 2017b), where all individual annotations are available, not just the majority vote.", "Unfortunately, we have not found any other aggression dataset, for which this information would also be given.", "Moreover the authors focus mainly on the multilingual aspect of the aggression detection (Modha et al., 2018; Risch and Krestel, 2018; Safi Samghabadi et al., 2020).", "In addition to deep neural models, less complex methods such as logistic regression are also used (Modha et al., 2018; Risch and Krestel, 2018).", "To the best of our knowledge, there are no work that dealt with the subjective problem of aggressiveness detection in the personalized way.", "The disagreement between annotators is usually measured by a single value, e.g. using Cohen's kappa or Krippendorf's alpha, and not investigated further.", "The researchers prefer a higher agreement level rather than controversy.", "Therefore, majority annotation is used in modeling, which to some extent leads to the loss of valuable information.", "There are several studies focusing on the problem of the disagreement in data annotations.", "This provides valuable information not only about the annotators, but also about the instances by reflecting their ambiguity (Aroyo and Welty, 2013).", "There may be no single right label for every text.", "The disagreement was used to divide annotators into polarized groups (Akhtar et al., 2020) or to filter out the spammers (Raykar and Yu, 2012; Sobern et al., 2013).", "In (Gao et al., 2019), attention was also drawn to the problem of conformity bias, where the reviewers tend to issue similar opinions.", "Less frequently, the disagreement is examined at the instance level, to measure its controversy or ambiguity, as in (Aroyo and Welty, 2013).", "For example, (Chklovski and Mihalcea, 2003) used confusion matrices in word sense tagging task to create and explore coarse sense clusters.", "We used the Wikipedia Talk Labels: Aggression data, gathered in the Wikipedia Detox project (Wul-czyn et al., 2017b,a).", "Unlike other collections, it provides information about all annotations given by Crowdflower workers (not only the majority vote) for 100k+ comments from English Wikipedia.", "The assigned aggression score ranged from very aggressive (-3), via neutral (0), to very friendly (3).", "It was binarized to '1 aggressive' for negative scores or '0 nonaggressive' for neutral or friendly annotations.", "The dataset contained a suggested data split into train , dev and test set.", "To enable our experiments, we removed annotations assigned by workers with less than 100 annotations in the train set, <20 in the dev set or <20 in the test set.", "Otherwise, we would not have data to extract user beliefs from and to perform personalization.", "We also removed users who did not assign any aggressive label in the dev set.", "Information about at least one text, that a specific user considered aggressive was crucial to model his individual perception of such content.", "Finally, there were 2,450 annotators left (Tab. 
1), so we randomly divided them into 10 equal-sized folds.", "The train set is used to calculate the representations (embeddings) of documents being classified.", "This is the only data exploited in the classic, generalizing approach (our baseline).", "The dev set provides information about user beliefs, i.e. their previous annotations.", "Individualized input features are extracted from dev data: (1) conformity measures and (2) personal embeddings in class-based and annotation-based personalization.", "Personalization-related calculations on the dev set refer to both the training and the testing procedure.", "The documents from the test set are embedded and classified by the trained model for validation purposes.", "For training and testing purposes, both controversy Contr for documents and conformity (GConf, WConf) for users are calculated within the dev set.", "Controversy Contr(d) ∈ [0, 1] of document d is an entropy-based measure expressed in the following way: Contr(d) = -Σ_{c ∈ {0,1}} (n_d^c / n_d) · log_2(n_d^c / n_d),", "where n_d^0 and n_d^1 are the numbers of negative and positive annotations assigned to document d, respectively; n_d is the total number of document d's annotations, n_d = n_d^0 + n_d^1; n_d^c / n_d approximates the probability that an annotation of document d is of class c.", "Contr(d) = 0 means that all users annotated d the same, Contr(d) = 1 when 50% of users perceived it aggressive and 50% not.", "Controversy Contr(d) is used to rank documents from the dev dataset.", "The most controversial texts (top k) are embedded in class-based or annotation-based personalization.", "Independently, controversy is computed within the test data in order to investigate differences in reasoning quality for more and less controversial documents.", "General conformity GConf(a, C) ∈ [0, 1] of human a quantifies how often a belongs to the majority of annotators evaluating individual texts.", "It can be of a different kind depending on the class C we consider: GConf(a, C) = [Σ_{d ∈ A_a} 1{l_d ∈ C ∧ l_d = l_{d,a}}] / [Σ_{d ∈ A_a} 1{l_d ∈ C}], where A_a is the set of documents annotated by a; C denotes the conformity type related to the considered classes, i.e. C = {0}, {1} or {0, 1}; l_{d,a} is the class label assigned by a to document d; l_d is d's class label obtained by majority voting.", "In case of equal annotations for both classes, document d is considered aggressive.", "GConf(a, C) = 1 when a annotated all documents d ∈ A_a the same as the others and no one annotated them otherwise.", "Note that depending on C, conformity can be calculated in three variants: for nonaggressive (C = {0}), aggressive (C = {1}) or any documents (C = {0, 1}) annotated by a.", "Such three conformity values are used as input features in conformity-based personalization, Sec. 7.", "Weighted conformity WConf(a, C) ∈ [0, 1] is similar to general conformity GConf(a, C) but it respects the size of the group the annotator belongs", "to, while evaluating the document.", "The larger the group that votes with annotator a, the greater annotator a's conformity: WConf(a, C) = [Σ_{d ∈ A_a} Σ_{c ∈ C} (n_d^c / n_d) · 1{l_{d,a} = c}] / [Σ_{d ∈ A_a} 1{l_{d,a} ∈ C}].", "To have some insight into our data, we calculated controversy Contr(d) on each dataset (train/dev/test).", "Fig. 2 presents the distribution of annotations for the controversy measure in the dev and test set.", "In both, the ratio of aggressive to nonaggressive documents is increasing and reaching 0.5 for the most controversial documents, i.e. 
Contr ( d ) = 1 resulting from the same number of aggressive and nonaggressive votes.", "The examples of such texts are following: \"Your behaviour is inappropriate and your reaction is ludicrous. Do they give out admin rights in cornflake packets now?\" , n 0 d = n 1 d = 5 .", "\"Far from being ridiculous, it is the recommended approach to follow on wikipedia. We don't simply state what either side claims, rather we report on how they are viewed by neutral 3rd party sources. Take it to WP:NPOVN if you don't believe me, rather than indulging in your continued disruptive habit of always having the WP:LASTWORD.\" , n 0 d = n 1 d = 14 .", "It was the main inspiration for our personalized methods.", "We also checked contribution of aggressive texts for the consecutive most controversial documents included in the personal user embeddings, Fig. 3. Figure 3: Contribution of aggressive texts in the following positions of the individual ranking of most controversial documents annotated by a given user.", "We assume that personal beliefs can be expressed by user activity, i.e. their individual annotations.", "It means that we can use information about k documents previously annotated by the user in the form of their embeddings or user conformity measures.", "It leads us to three novel personalization methods: (1) conformity-based , (2) text-based , and (3) annotation-based , Fig. 4. According to our initial studies, the most informative were user annotations provided for most controversial documents.", "In conformity-based personalization , we exploited simple conformity measures that represent the beliefs of one user in the aggregated way: GConf and WConf .", "Each of them can deliver three separate values: for only aggressive, only nonaggressive, and all texts.", "Finally, we examined input feature sets based on only GConf , only WConf , and on both, Sec. 7.", "We also propose two versions of personal embeddings for previously annotated texts: class-based and annotation-based .", "The class-based embedding consists of two fastText embeddings of k documents from the dev set that the user rated as (1) nonaggressive and (2) separately as aggressive, Fig. 4. Each of the two embeddings can aggregate any and different number of previous user annotations; the embedding size is static for every k .", "If the user has not annotated any texts of given class (e.g. aggressive), the embedding represents an empty string (zeros).", "Overall, it is a very rare case in our experiments, mostly happening for k = 1 .", "The annotation-based embeddings consider all k user annotations individually.", "For each such text d , we use the following features: (1) the embedding of the d 's content, (2) its controversy", "Contr(d) , (3) the percentage of users who rated d as nonaggressive, (4) the rating of the given user (0/1), and (5) the information on whether this rating is consistent with the the majority rating.", "Thus, we receive a relatively large number of input features: 300+ k 304 .", "Our general personalized aggressiveness detection procedure is as follows: 1. We ask users to annotate k most controversial documents from the pre-defined set (here dev ).", "2. Information from the first step is used to extract individually-specific features reflecting personal user beliefs, i.e. conformity measures or embeddings of these k texts (class-based and annotation-based methods).", "3. A subset of the same users (upper rows in Fig. 
1) annotate next documents.", "The data about their following annotations (embeddings of texts from train ) together with data from step 2. are used to train the classifier.", "4. For some other users (lower rows in Fig. 1), we also collect their annotations (the test set).", "Together with the information about their individual preferences (step 2.) they are used for validation (testing) purposes only.", "To validate our three personalized methods, we utilized Wikipedia Talk Labels: Aggression , see Sec. 3. We applied 10-fold cross-validation based on users.", "The first nine sets are used to train the model (upper rows in Fig. 1), while the remaining 10th set for testing (lower rows in Fig. 1).", "The results presented in plots are averaged over all ten folds.", "Since only dev texts with annotations are assumed to represent prior knowledge about users, they were used to test personalization scenarios for each of our three methods: class-based, annotation-based, and conformity-based.", "The last one was in three variants: only three GConf ( a, C ) measures (for C = { 0 } , { 1 } , { 0 , 1 } ), only three W Conf ( a, C ) measures, all six conformity values.", "Thus, we analyzed five methods in total.", "For each of them, we considered: (1) different number k", "=1,2,..20 of texts d previously annotated by user a : d A a (for conformity-based methods | A a | = k , (2) different selection procedures for texts d A a used to represent a 's beliefs (person-alization): (2a) k most controversial texts d A a , Figure 4: A classic approach generalizing output based solely on textual content (the same decision for all users) an upper flow (our baseline).", "(2b) k class-balanced most controversial (like 2a but with class balancing), (2c) most aggressive d A a (rank according to % of aggressive annotations among all for d ), (2d) random selection of k texts d A a .", "In total, we tested: 10 folds x (5 methods x 20 distinct k no. of texts x 4 selection + 1 baseline) = 4,010 models.", "The logistic regression models were optimized during the training process by using the L2 regularization and the early stopping mechanism.", "Both of them aim to prevent overfitting and the early stopping mechanism additionally ensures that the model instance that achieved the best loss function score is preserved.", "The models were run on Intel Xeon Processor E5-2650 v4.", "We also compared our personalized methods with the baseline, i.e. 
the commonly investigated approach generalizing user perception.", "It exploited only the evaluated text embeddings as the input.", "We considered classification performance not only for the whole test set but also in its breakdown of 10 percentage buckets according to three independent rankings of test docs: (1) most controversial ( Contr ( d ) ), (2) with least conformity GConf ( a, { 0 , 1 } ) , averaged over all a T est annotating d , (3) least W Conf ( a, { 0 , 1 } ) .", "Here, the measures were computed for the test set only, not for dev .", "It was used to investigate where our models more outperform the baseline.", "In order to generate text embeddings in each personalization method, we used the fastText library (Bojanowski et al., 2017; Joulin et al., 2017).", "It offers pre-trained word vectors for 157 languages, based on the continuous bag of words (CBOW) model in a 300-dimensional space, with character n-grams of length 5.", "Both class-based and annotation-based methods were tested using various rankings while selecting texts for personal embeddings: most controversial, class-balanced most controversial, most aggressive, and random.", "The conformity-based methods were evaluated in terms of the measure variant used: general conformity, weighted conformity, and both, all with random selection of texts.", "The results for three conformity-based personalization methods, i.e. three different sets of input conformity features (Sec. 7) and various number", "of texts used to calculate user conformity are shown in Fig. 5a.", "The greater k results in more precise evaluation of user conformity.", "It also directly and positively impacts on model performance, although gains for k > 15 are very small.", "Additionally, we considered the performance for more and less controversial documents in the test set, Fig. 6a.", "It is clearly visible that the non-personalized method is completely lost for the most controversial documents.", "However, our conformity-based models lose relatively less.", "It appears that their gain (smaller loss) is greater for 30% most controversial texts.", "In other words, the greater controversy, the greater gain from personalization.", "Fig. 5b describes evaluation of class-based embeddings for various text selection approaches and different number of previously annotated texts.", "The performance was shown only for texts from the aggression class (the same plot shapes were for macro F1 and both classes).", "The models using the most controversial texts for selection reached the best results in 14 out of 20 cases (70%).", "The highest F1 score was achieved for only 4 texts representing user beliefs.", "It was greater than the model without any personalization by over 7pp.", "Annotation-based embeddings were tested for the same rankings as in Sec. 8.2, Fig. 5c.", "The most controversial texts used to generate user representations and feed the model provided the best results in 17 out of 20 cases (85%).", "The best performance was achieved while using 18 texts to represent user personal beliefs then, the input consisted of 5,772 features.", "The F1 score of this model was greater than the baseline by over 10pp.", "The greater gain compared to the not personalized method is exposed for 50% of the most controversial texts in the test set; the greatest for 10% of the most controversial even 22.7 percentage points (twice better: 44.0% vs. 21.3%), Fig. 
6b.", "The best models from each personalization method, which were achieved for annotations of most controversial texts, are compared in Fig. 5d.", "Models based on annotation-based embeddings provided significantly better results than the others in 10 out of 20 cases of k values (50%).", "The conformity-based models performed better than other models in Figure 6: Performance of two personalized methods proposed in the paper, only for the aggression class:", "3 out of 20 cases (15%); it referred to the smallest number of texts considered ( k = 1 3 ).", "The highest value of F1 score was achieved by the model using 18 texts to represent user personal beliefs.", "However, this solution used 5,772 input features, whereas the much simpler conformity-based model with 306 input features was only 2.7 percentage points worse.", "Simultaneously, conformity-based model training time was 38.6 times faster than the annotation-based one, Fig. 7.", "Practically, we would like to avoid bothering the user with too many previous annotations, i.e. we may want to limit k to just a few, for example k = 4 .", "Then, we should select k most controversial texts and use either class-based or conformity-based personalization.", "They learn just as fast but keep the same performance: 7.3 percentage points, 5.7 percentage points greater F1 for class aggressive , respectively, and 3.9 percentage points, 3.2 percentage points greater macro F1 (for both classes), respectively.", "Random selection of k texts for personalization is almost always worse than dedicated rankings, Fig. 6b,c.", "Most controversial texts turned out to be the best option that usually outperformed the most aggressive and class-balanced most controversial.", "A valuable observation from our experiments is that already one document used to valuate user beliefs is enough to significantly improve reasoning, Fig. 5d.", "Anyway, more texts in personalization keep boosting the performance, but about 4-5 previously annotated most controversial documents seem to be a reasonable trade-off between reasoning quality Figure 7: Training computing time for personalization methods in reference to no personalization method.", "Annotation-based embeddings most precisely express user opinions, but it comes at the cost of linearly longer learning and demand for more samples.", "They also cannot easily adapt to different number k of personalization documents.", "We decided to utilize very fast logistic regression model with fastText embeddings, since we wanted to examine thousands of models related to multiple scenarios, not all are presented here.", "We belief our personalization methods establish a new research direction: how to effectively and efficiently embed user beliefs?", "We expect new methods will be developed for that purpose.", "One of the most important postulate derived from our research is the demand for new datasets collections.", "We need annotations of individual humans rather than aggregated and agreed general beliefs received by majority voting, by annotator training, or by removal of controversial texts.", "Besides, our personalization methods may be applied to any NLP problem with inconsistencies between people.", "It especially refers to diverse emotions evoked by textual content, hate speech, detection of cyberbullying or offensive, toxic, abusive, harmful, or socially unaccepted content.", "The common problem of imbalanced classes in aggressiveness detection (Tab. 1, Fig. 
12) will be addressed in future work.", "The main conclusion from our research is that the natural controversies associated with individual perceptions of contents should not be overlooked or reduced but rather directly exploited in personalized solutions.", "Ultimately, this reflects the diversity in our societies.", "Our three new personalization methods make use of texts previously annotated by a given user by means of conformity measures, class-based or annotation-based embeddings.", "Just a few documents are able to capture individual user beliefs, the more so, the more controversial documents they relate to.", "As a result, all our methods outperform classic solutions that generalize offensiveness understanding.", "The gain is greater for more controversial documents.", "The personalization solutions can also be applied to other NLP problems, where the content tends to be subjectively perceived as hate speech, cyberbullying, abusive or offensive, as well as in prediction of emotions elicited by text (Kocon et al., 2019a; Milkowski et al., 2021) and even in sentiment analysis (Kocon et al., 2019; Kanclerz et al., 2020).", "We keep working on testing of our methods on more resource-demanding but also more SOTA language representations: XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019), and XLM-RoBERTa (Conneau et al., 2020).", "This work was financed by (1) the National Science Centre, Poland, project no. 2020/37/B/ST6/03806; (2) the Polish Ministry of Education and Science, CLARIN-PL Project; (3) the European Regional Development Fund as a part of the 2014-2020 Smart Growth Operational Programme, CLARIN Common Language Resources and Technology Infrastructure, project no.", "POIR.04.02.00-00C002/19." ]
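The controversy and conformity measures defined earlier can be computed directly from raw annotations. Below is a minimal sketch under the stated definitions, assuming annotations are given as a mapping {document_id: {annotator_id: label}} with 1 = aggressive and 0 = nonaggressive, and resolving majority-vote ties as aggressive as described in the text; all names are illustrative, not from the released repository.

```python
import math
from collections import Counter

def controversy(labels):
    """Entropy-based Contr(d) over the binary annotations of one document."""
    n = len(labels)
    counts = Counter(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values() if c > 0)

def majority_label(labels):
    counts = Counter(labels)
    return 1 if counts[1] >= counts[0] else 0    # ties are treated as aggressive

def general_conformity(annotations, annotator, classes=(0, 1)):
    """GConf(a, C): share of documents with majority label in C on which a agrees with the majority."""
    num = den = 0
    for votes in annotations.values():
        if annotator not in votes:
            continue
        maj = majority_label(list(votes.values()))
        if maj in classes:
            den += 1
            num += int(votes[annotator] == maj)
    return num / den if den else 0.0

def weighted_conformity(annotations, annotator, classes=(0, 1)):
    """WConf(a, C): conformity weighted by the share of annotators who voted like a."""
    num = den = 0.0
    for votes in annotations.values():
        if annotator not in votes or votes[annotator] not in classes:
            continue
        labels = list(votes.values())
        den += 1
        num += labels.count(votes[annotator]) / len(labels)
    return num / den if den else 0.0

# Toy usage with three documents and three hypothetical annotators.
ann = {"d1": {"a": 1, "b": 1, "c": 0},
       "d2": {"a": 0, "b": 0, "c": 0},
       "d3": {"a": 1, "b": 0, "c": 0}}
print(controversy(list(ann["d1"].values())))                        # ~0.918 for a 2-vs-1 split
print(general_conformity(ann, "a"), weighted_conformity(ann, "a"))
```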
[ "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "objective", "result", "method", "method", "abstain", "abstain", "objective", "objective", "method", "method", "result", "method", "result", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "other", "other" ]
[ "Multi-label emotion classification is an important task in NLP and is essential to many applications.", "In this work, we propose a sequence-to-emotion (Seq2Emo) approach, which implicitly models emotion correlations in a bi-directional decoder.", "Experiments on SemEval'18 and GoEmotions datasets show that our approach outperforms state-of-the-art methods (without using external data).", "In particular, Seq2Emo outperforms the binary relevance (BR) and classifier chain (CC) approaches in a fair setting.", "1 1 Introduction Emotion classification from text (Yadollahi et al., 2017; Sailunaz et al., 2018) plays an important role in affective computing research, and is essential to human-like interactive systems, such as emotional chatbots (Asghar et al., 2018; Zhou et al., 2018; Huang et al., 2018; Ghosal et al., 2019).", "Early work treats this task as multi-class classification (Scherer and Wallbott, 1994; Mohammad, 2012), where each data instance (e.g., a sentence) is assumed to be labeled with one and only one emotion.", "More recently, researchers relax such an assumption and treat emotion analysis as multi-label classification (MLC, Mohammad et al., 2018; Demszky et al., 2020).", "In this case, each data instance may have one or multiple emotion labels.", "This is a more appropriate setting for emotion analysis, because an utterance may exhibit multiple emotions (e.g., angry and sad, surprise and joy).", "The binary relevance approach (BR, Godbole and Sarawagi, 2004) is widely applied to multi-label emotion classification.", "BR predicts a binary indicator for each emotion individually, assuming that the emotions are independent given the input sentence.", "However, evidence in psychotherapy 1 Our code is available at https://github.com/ chenyangh/Seq2Emo suggests strong correlation among different emotions (Plutchik, 1980).", "For example, hate may co-occur more often with disgust than joy.", "An alternative approach to multi-label emotion classification is the classifier chain (CC, Read et al., 2009).", "CC predicts the label(s) of an input in an autoregressive manner, for example, by a sequence-to-sequence (Seq2Seq) model (Yang et al., 2018).", "However, Seq2Seq models are known to have the problem of exposure bias (Bengio et al., 2015), i.e., an error at early steps may affect future predictions.", "In this work, we propose a sequence-to-emotion (Seq2Emo) approach, where we consider emotion correlations implicitly.", "Similar to CC, we also build a Seq2Seq-like model, but predict a binary indicator of an emotion at each decoding step of Seq2Seq.", "We do not feed predicted emotions back to the decoder; thus, our model does not suffer from the exposure bias problem.", "Compared with BR, our Seq2Emo model implicitly considers the correlation of emotions in the hidden states of the decoder, and with an attention mechanism, our Seq2Emo is able to focus on different words in the input sentence that are relevant to the current emotion.", "We evaluate our model for multi-label emotion classification on SemEval'18 (Mohammad et al., 2018) and GoEmotions (Demszky et al., 2020) benchmark datasets.", "Experiments show that Seq2Emo achieves state-of-the-art results on both datasets (without using external data).", "In particular, Seq2Emo outperforms both BR and CC in a fair, controlled comparison.", "Emotion classification is an activate research area in NLP.", "It classifies text instances into a set of emotion categories, e.g., angry, sad, happy, and surprise.", "Well-accepted emotion 
categorizations include the six basic emotions in Ekman (1984) and the eight primary emotions in Plutchik's wheel of emotions (1980).", "Early work uses manually constructed emotion lexicons for the emotion classification task (Tokuhisa et al., 2008; Wen and Wan, 2014; Shahraki and Zaiane, 2017).", "Such lexicon resources include WordNet-Affect (Strapparava and Valitutti, 2004), EmoSenticNet (Poria et al., 2014), and the NRC Emotion Intensity Lexicon (Moham-mad, 2018).", "Distant supervision (Mintz et al., 2009) has been applied to emotion classification, as researchers find existing labeled datasets are small for training an emotion classifier.", "For example, Mohammad (2012) finds that social media users often use hashtags to express emotions, and thus certain hashtags can be directly regarded as the noisy label of an utterance.", "Likewise, Felbo et al. (2017) use emojis as noisy labels for emotion classification.", "Such distant supervision can also be applied to pretrain emotion-specific embeddings and language models (Tang et al., 2014; Ghosh et al., 2017).", "In addition, Yu et al. (2018) apply multi-task learning to combine polarity sentiment analysis and multi-label emotion classification with dual attention.", "Different from the above studies that use extra emotional resources, our work focuses on modeling the correlations among emotions.", "This improves multi-label emotion classification without using additional data.", "A similar paper to ours is the Sequence Generation Model (SGM, Yang et al., 2018).", "SGM accomplishes multi-label classification by an autoregressive Seq2Seq model, and is an adaptation of classifier chains (Read et al., 2009) in the neural network regime.", "Our paper models emotion correlation implicitly by decoder hidden states and does not suffer from the drawbacks of autoregressive models.", "Consider a multi-label emotion classification problem.", "Suppose we have K predefined candidate emotions, and an utterance or a sentence x can be assigned with one or more emotions.", "We represent the target labels as y = ( y 1 , , y K ) { 0 , 1 } K with y i = 1 representing that the i th emotion is on.", "Our Seq2Emo is a Seq2Seq-like framework, shown as Figure", "1. It encodes x with an LSTM, and iteratively performs binary classifications over y i with another LSTM as the decoder.", "Formally, let a sentence be x = (x 1 , , x M ) .", "We first encode each word x i with GloVe embeddings (Pennington et al., 2014), denoted by GloVe(x i ) .", "We further use the ELMo contextual embeddings (Peters et al., 2018), which processing the entire sentence x by a pretrained LSTM.", "The corresponding hidden state is used as the embedding representation of a word x i in its context.", "This is denoted by ELMo( x ) i .", "We use a two-layer bi-directional LSTM on the above two embeddings.", "The forward LSTM, for example, has the form h Et = LSTM E ([GloVe(x t ); ELMo( x ) t ] , h Et 1 ) where the superscript E denotes the encoder.", "Likewise, the backward LSTM yields the representation h Et .", "They are concatenated as h Et = [ h E ; h E ] .", "Here, we use BiLSTM for simplicity, following Sanh et al. (2019) and Huang et al. 
(2019).", "Other pretrained models, such as the Tranformer-based BERT (Devlin et al., 2019), may also be adopted.", "This, however, falls out of the scope of our paper, as we mainly focus on multi-label emotion classification.", "Empirical results on the GoEmotions dataset shows that, by properly addressing multi-label classification, our model outperforms a Transformer-based model (Table 2).", "Decoder.", "In Seq2Emo, an LSTM-based decoder is used to make sequential predictions on every candidate emotion.", "Suppose a predefined order of emotions is given, e.g., angry, joy, and sad.", "The decoder will perform a binary classification over these emotions in sequence.", "The order, in fact, does not affect our model much, as it is the same for all training samples and can be easily learned.", "In addition, we feed a learnable emotion embedding as input at each step of the decoder.", "This enhances the decoder by explicitly indicating which emotion is being predicted at a step.", "Different from a traditional Seq2Seq decoder, we do not feed previous predictions back as input, so as to avoid exposure bias.", "This also allows Seq2Emo to use a bi-directional LSTM as the decoder, which implicitly model the correlation among different emotions.", "where e j is the embedding for the j th emotion, and h Dj 1 is calculated by the attention mechanism in Luong et al. (2015).", "Here, the attention mechanism dynamically aligns source words when predicting the specific target emotion at a decoding step.", "Let j,i be the attention probability of the j th decoder step over the i th encoder step, computed by s j,i = ( h Dj ) (cid:62) W a h Ei (2) j,i = exp( s j,i ) (cid:80) Mi =1 exp( s j,i ) (3) where M is the number of encoder steps, and s j,i computes an unnormalized score for each pair of h Dj and h Ei with a learnable parameter matrix W a .", "Then, we compute an attention-weighted sum of encoder hidden states as the context vector c j : c j = M (cid:88) i =1 j,i h Ei (4) The context vector is concatenated with the LSTM hidden state as h Dj = [ c j ; h Dj ] .", "Likewise, we compute h D j for the backward decoder LSTM.", "They are further concatenated for predicting the emotion in question: P ( y j = 1 | x ) = ( w (cid:62) j [ h Dj ; h Dj ] + b j ) (5) where is a sigmoid function; w j and b j are the parameters for predicting the j th emotion.", "Notice that w j and b j are different at decoding different steps, because we are predicting different emotions.", "This treatment is similar to the binary relevance approach (BR, Godbole and Sarawagi, 2004).", "Our Seq2Emo implicitly models the correlations among emotions through the decoder's bidirectional LSTM hidden states, which is more suited to multi-label classification than BR's individual predictions.", "Our Seq2Emo also differs from the classifier chain approach (CC, Read et al., 2009), which uses softmax to predict the next plausible emotion from all candidates.", "Thus, CC has to feed the previous predictions as input, and suffers from the exposure bias problem.", "By contrast, we predict the presence of all the emotions in sequence.", "Hence, feeding back previous predictions is not necessary, and this prevents the exposure bias.", "In this sense, our model combines the merits of both BR and CC.", "Datasets.", "We conduct experiments on two multi-labeled emotion datasets: SemEval'18 (Affect in Tweets: Task E-c, Mohammad et al., 2018) and GoEmotions (Demszky et al., 2020).", "Compared with GoEmotions, SemEval'18 has fewer emotion categories, 
and is smaller in size.", "Both datasets come with standard train-dev-test splits.", "Appendix A shows the statistics of these datasets.", "Metrics.", "Following Yang et al. (2018) and Mohammad et al. (2018), we use Jaccard Index (Rogers and Tanimoto, 1960), Hamming Loss (Schapire and Singer, 1999), Macroand Micro-averaged F1 scores (Chinchor, 1992) as the evaluation metrics.", "Among them, Jaccard, Macroand Micro-F1 are different ways of counting correctly predicted labels (the higher, the better); Hamming Loss (HL) counts the misclassifications (the lower, the better).", "Baselines.", "On SemEval'18, we compare our system with the top submissions from the SemEval-2018 competition and recent development.", "NTUA-SLP (Baziotis et al., 2018) uses large amount of external emotion-related data to pretrain an LSTM-based model.", "TCS Research's system (Meish-eri and Dey, 2018) uses the support vector machine with mannually engineered features: output from LSTM models, emotion lexicons (Mo-hammad and Kiritchenko, 2015), and SentiNeural (Radford et al., 2017).", "PlusEmo2Vec (Park et al., 2018) combines neural network models, which are pretrained by using emojis as labels (Felbo et al., 2017).", "Apart from the competition, Yu et al. (2018) propose DATN, which introduces sentiment information through dual-attention.", "These aforementioned systems are based on the BR approach.", "SGM (Yang et al., 2018), however, is a CC-based model for multi-label classification.", "We include it as a baseline by using its publicly released code.", "2 Since GoEmotions dataset is fairly recent, we only include the results originally reported by Demszky et al. (2020).", "Settings.", "For the encoder, we set the two-layer bi-directional LSTM's dimension to 1200.", "Given the small number of emotions to embed, we set the dimension of decoder LSTM to 400.", "The GloVe embedding is 300 dimensional, and the ELMo embedding is 1024 dimensional.", "We use the Adam optimizer (Kingma and Ba, 2015), where the learning rate is set to 5e-4 initially and decayed with cosine annealing.", "The batch size is set to 16 for SemEval'18, and set to 32 for GoEmotions for effi-ciency concerns.", "We perform 5-fold cross-validation on the combined train-dev split for each experiment.", "Within each fold, we apply early stopping to prevent over-fitting and return the best model based on Jaccard accuracy for testing.", "We then merge the predicted results over the test set by majority voting.", "Additionally, we repeat each 5-fold experiment 5 times to further improve reduce noise.", "Overall performance.", "Table 1 presents the results on the SemEval'18 dataset.", "The proposed Seq2Emo outperforms the top submissions of the SemEval-2018 shared task in general.", "Compared with the median submission, Seq2Emo outperforms over 10% in the Jaccard accuracy.", "Admittedly, Seq2Emo performs slightly lower (but comparably) with NTUA-SLP and DATN, both introducing extra emotion/sentiment information through transfer learning.", "Our work, however, focuses on modeling the multi-label classification problem for emotion analysis and achieves high performance.", "While both NTUA-SLP and DATN are based on the BR approach, we implement additional baselines for fair comparison.", "In particular, we implement BR and BR-att variants, where the latter uses an attention mechanism when predicting the emotions, similar to our Seq2Emo.", "In the same spirit, we also implement a CC-based baseline, which is a Seq2Seq model predicting the next emotion among all 
candidates.", "For fair comparison, all of the BR, BR-att, and CC variants are trained with the same setting as our Seq2Emo.", "In this controlled setting, we observe that the proposed Seq2Emo consistently outperforms BR, BR-att, and CC on the SemEval'18 dataset in all metrics.", "For the GoEmotions dataset, we show the results in Table 2.", "Since it is a very new dataset, we can only find previously reported results from Demszky et al. (2020).", "In addition, we include BR, BR-att, and CC for fair comparison.", "Results show that Seq2Emo outperforms other models on most of the metrics, except that Seq2Emo is worse than CC on Jaccard accuracy.", "This is understandable, as we have quite a few metrics with different datasets.", "It is worth noting that the model of Demszky et al. (2020) is based on BERT (Devlin et al., 2019).", "We replicate their approach to obtain all the evaluation metrics.", "We observe that our replication achieves a similar Macro-F1 to Demszky et al. (2020), and thus our replication is fair.", "The results show that our Seq2Emo achieves comparable or higher performance than the BERT-based model.", "We run one-sided t-tests to compare Seq2Emo with the best competing model that does not use additional data, shown in Tables 1 and 2.", "(Table 2, GoEmotions results; columns: # Model, Jaccard, Micro F., Macro F., HL: 1 BERT (Demszky et al., 2020) 46.00; 2 BERT (our implementation) 53.06, 58.49, 46.23, 0.0312; 3 BR 52.76, 58.21, 45.38, 0.0312; 4 BR-att 53.35, 58.53, 45.11, 0.0310; 5 CC 55.61, 58.38, 43.92, 0.0352; 6 Seq2Emo (uni) 53.07, 58.76, 45.30, 0.0306; 7 Seq2Emo 53.79, 59.57, 47.28, 0.0302; t-test p < 0.)", "Results verify that most of the comparisons are statistically significant (although some are more significant than others).", "The two experiments provide consistent evidence on the effectiveness of our Seq2Emo.", "Seq2Emo with a uni-directional decoder.", "One of the virtues of Seq2Emo is that it can use a bi-directional LSTM decoder.", "To show its effectiveness, we perform experiments on Seq2Emo with a uni-directional decoder, denoted as Seq2Emo (uni).", "We show the results in Tables 1 and 2 for SemEval'18 and GoEmotions datasets, respectively.", "We first observe that Seq2Emo performs better than Seq2Emo (uni), which in turn is better than BR-att that predicts emotions individually.", "This confirms that our Seq2Emo is able to implicitly model the correlation of different emotions, and that a bi-directional decoder is better than a uni-directional one.", "Order of emotions.", "Both Seq2Emo and the classifier chain (CC) predict emotions sequentially.", "The difference is that our Seq2Emo predicts the presence (or not) of an emotion in a predefined order.", "CC predicts the next salient emotion autoregressively and learns the emotion order from the training data.", "We try different orders, including the original order in the dataset and the ascending/descending order based on emotion frequency.", "We also try an order where the emotion frequency first increases and then decreases (concave-down), and vice versa (concave-up).", "We perform experiments on SemEval'18 and report the Jaccard accuracy and the standard deviations in Table 3.", "The results show that Seq2Emo is the least affected by the order of the emotions, whereas the performance of CC varies largely.", "This verifies that the emotion order does not affect Seq2Emo much as it can be easily learned.", "CC is more sensitive to emotion order and has a larger variance, as it suffers from the exposure bias problem.", "Attention visualizations are provided in Appendix B.
Results show that our Seq2Emo can attend to relevant words when predicting the emotion of interest.", "In this work, we propose Seq2Emo for multi-label emotion classification.", "Our approach implicitly models the relationship of different emotions in its bi-directional decoder, and is shown to be better than an individual binary relevance (BR) classifier.", "Our model does not suffer from the exposure bias problem and also outperforms the classifier chain (CC).", "In general, we achieve state-of-the-art performance for multi-emotion classification on the SemEval'18 and GoEmotions datasets (without using additional emotion labels).", "We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) under grant Nos.", "RGPIN-2020-04465 and RGPIN-2020-04440.", "Chenyang Huang is supported by the Borealis AI Graduate Fellowship Program.", "Lili Mou and Osmar Zaane are supported by the Amii Fellow Program and the Canada CIFAR AI Chair Program.", "This research is also supported in part by Compute Canada ( www.computecanada.ca )." ]
[ "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "result", "result", "result", "other", "other", "other", "other", "other" ]
[ "Conversational KBQA is about answering a sequence of questions related to a KB.", "Follow-up questions in conversational KBQA often have missing information referring to entities from the conversation history.", "In this paper, we propose to model these implied entities, which we refer to as the focal entities of the conversation.", "We propose a novel graph-based model to capture the transitions of focal entities and apply a graph neural network to derive a probability distribution of focal entities for each question, which is then combined with a standard KBQA module to perform answer ranking.", "Our experiments on two datasets demonstrate the effectiveness of our proposed method.", ": Mia Farrow : And Alan Arkin was behind ?", ": Schmendrick : So, who sang for the film?", "R : America : Genre of this band's music?", "R : Folk rock, Soft rock : By the way, who was the director?", "R : Jules Bass : What novel has the character named Nick Carraway?", "Recently, conversational Knowledge Base Question Answering (KBQA) has started to attract peo-ple's attention (Saha et al., 2018; Christmann et al., 2019; Guo et al., 2018; Shen et al., 2019).", "Motivated by real-world conversational applications, particularly personal assistants such as Apple Siri and Amazon Alexa, the task aims to answer questions over KBs in a conversational manner.", "Figure 1 shows an example of conversational KBQA.", "As we can see, the conversation can be roughly divided into two parts: Q1, Q2 and Q3 revolve around the book The Great Gatsby, while Q4 and Q5 revolve around its author, F. Scott Fitzgerald.", "Although these entities are not explicitly mentioned in the questions, they are implied by the conversation history, and they are critical for answering the questions.", "For example, Q3, when taken out of context, cannot be answered because Q3 itself does not state the title of the book being discussed.", "But since Q3 is a follow-up question of Q1, humans can easily infer that the book of interest here is The Great Gatsby and can hence answer the question correctly.", "regard the entity The Great Gatsby as the focus of the conversation at this point.", "When we move on to Q4, again, if the question is taken out of context, we cannot answer it.", "But by following the conversation flow, humans can guess that at this point the focus of the conversation has shifted to be F. Scott Fitzgerald (the answer to Q3), and based on this understanding, humans would have no problem answering Q4.", "We refer to The Great Gatsby and F. Scott Fitzgerald as the focal entities of the conversation.", "Based on the observation above, we hypothesize that it is important to explicitly model how a conversation transits from one focal entity to another in order to effectively address the conversational KBQA task.", "There are at least two scenarios where knowing the current focal entity helps answer the current question.", "(1) The current focal entity is the unspecified topic entity 1 of the current question.", "E.g., The Great Gatsby is the unspecified topic entity for Q3, which effectively should be what is the name of the author of The Great Gatsby ? 
(2) The current focal entity is closely related to the 1 In KBQA, a topic entity is an entity mentioned in the question and the starting point in the KB to search for answers.", "topic entity of the current question and can help narrow down the search space in case of ambiguity.", "E.g., knowing the focal entity is The Great Gatsby for Q2, the system can identify the correct subgraph of the KB that contains both Jay Gatsby (the topic entity) and The Great Gatsby for answer prediction, which is critical if there are more than one entities in the KB named Jay Gatsby.", "We can also see that simple entity coreference resolution techniques (e.g., Lee et al. (2017)) may not always help for conversational KBQA as no pronouns are used in many cases.", "Although existing work on conversational KBQA has tried to address the challenges of missing information in follow-up questions by modeling conversation history, most of it simply includes everything in the conversation history without considering focal entities.", "For example, Saha et al. (2018) leveraged a hierarchical encoder to encode all the questions and responses in the conversation history, but there was no explicit modeling of anything similar to focal entities.", "Guo et al. (2018) concatenated previous questions with the current question to fill in the missing information, but again there was no special treatment of entities.", "A more recent work (Christmann et al., 2019) believed that the answers to sequential questions should be closely connected to each other in the KB.", "Thus, they proposed an algorithm to keep a context graph in memory, expanding it as the conversation evolves to increase the connections between the questions.", "However, their method is inefficient in capturing the most significant information related to focal entities in a conversation history.", "In this paper, we explicitly model the focal entities and their transitions in a conversation in order to improve conversational KBQA.", "Based on several observations we have with focal entities, such as their tendencies to be topic entities or answer entities in the conversation history and their stickiness in a conversation, we propose to construct an Entity Transition Graph to elaborately model entities involved in the conversation as well as their interactions, and apply a graph-based neural network to derive a focal score for each entity in the graph, which represents the probability of this entity being the focal entity at the current stage of the conversation.", "The key intuition behind the graph neural network is to propagate an entity's focal score in the i -th turn of the conversation to its neighboring entities in the ( i + 1) -th turn of the conversation.", "This derived focal entity distribution is then incorporated into a standard single-turn KBQA system to handle the current question in the conversation.", "We evaluate our proposed method on two conversational KBQA datasets, ConvQuestions (Christ-mann et al., 2019) and ConvCSQA (which is a subset we derived from CSQA (Saha et al., 2018)).", "Experiment results show that compared with either a single-turn KBQA system or a system that simply encodes the entire conversation history without handling focal entities in a special way, our method can clearly perform better on both datasets.", "Our method also outperforms several existing systems that represent the state of the art on these benchmark datasets.", "We also conduct error analysis that sheds light on where further improvement is desired.", "We 
summarize our contributions of this paper as follows: (1) We propose to explicitly model the focal entities of a conversation in order to improve conversational KBQA.", "(2) We propose a graph-based neural network model to capture the transitions of focal entities and derive a focal entity distribution that can be plugged into a standard single-turn KBQA system.", "(3) We empirically demonstrate the effectiveness of our method on two datasets.", "Our method can outperform the state of the art by 9.5 percentage points on ConvQuestions and 14.3 percentage points on ConvCSQA 2 .", "A KB K consists of a large number of triplets ⟨ e s , r, e o ⟩ , where e s and e o are entities and r", "indicates their relation.", "We first define single-turn KBQA as follows.", "Given a KB K and a question q , the system is supposed to return one or more entities from K as the answer to q .", "In single-turn KBQA, different question-answer pairs D = { ( q 1 , a 1 ) , ( q 2 , a 2 ) , . . . } are independent.", "Conversational KBQA is a multi-turn KBQA problem, where a sequence of question-answer pairs c = (( q 1 , a 1 ) , ( q 2 , a 2 ) , ..., ( q m , a m )) forms a complete conversation and a set of independent conversations D = { c 1 , c 2 , . . . } forms a conversational KBQA dataset.", "We refer to each question-answer pair as one turn of the conversation .", "A conversational KBQA system is supposed to return
Recall that a focal entity is the focus of the conversation at its current stage.", "To model focal entities, we propose to first use an Entity Transition Graph to model all the entities involved in the conversation so far and their interactions.", "These entities are candidate focal entities.", "The edges of the graph reflect how the conversation has shifted from one entity to another, and such transitions can help us estimate how likely an entity is the current focal entity, as we will explain in Section 3.2.", "This graph is incrementally constructed by a Graph Constructor after each turn of the conversation.", "To derive a focal score (i.e., a probability) for each entity in this graph, a Focal Entity Predictor employs a graph-based neural network and generates a new focal entity distribution based on the previous focal entity distribution as well as the conversation history, which is encoded by a Conversation History Encoder using a standard sequence model.", "Finally, the derived focal entity distribution is incorporated into the single-turn KBQA module presented in Section 2.2 to perform answer prediction.", "The overall architecture of our method is illustrated in Figure 2. 3.2 Entity Transition Graph and Graph Constructor Our Graph Constructor builds the Entity Transition Graph as follows.", "The initial Entity Transition Graph G (0) is set to be an empty graph.", "Let G ( t 1) denote the Entity Transition Graph before the t th turn of the conversation, and suppose we have processed the t -th question and obtained the answer entity a t (which is predicted) with the help of G ( t 1) .", "We now need to construct G t , which will be used to help answer q t +1 .", "Recall that the Answer Predictor presented in Section 2.2 obtains the answer entity a t by identifying a top-ranked query graph, which starts from either an entity in G ( t 1) or a topic entity mentioned in q t .", "Let S t denote all the entities except a t in this top-ranked query graph.", "The Graph Constructor adds the following nodes and edges to G ( t 1) in order to build G t .", "For each entity e S t , add e to the graph as a node if it does not exist in the graph yet.", "Also add a t to the graph as a node if it does not exist yet.", "For each newly added node e , add a self-loop edge from e to itself.", "For each entity e S t , add a forward edge from e to a t .", "For each entity e S t , add a backward edge from a t to e .", "For each entity e S 1 , i.e., the entities relevant to the first question, add a backward edge from a t to e .", "The way we construct the Entity Transition Graph as described above is based on the following observations with focal entities: (1) A focal entity is often an answer entity to a previous question.", "Therefore we include all previous answer entities in the graph.", "(2) A focal entity is also likely to be an entity relevant to a previous question that has led to the answer entity.", "We therefore also include those entities in the query graphs into the Entity Transition Graph.", "(3) The focal entity tends to stay unchanged and thus has a stickiness property in a conversation.", "Thus we add a self-loop edge for each node.", "(4) The focal entity may often go back to some entity relevant to the first question.", "Therefore, we always add an edge from the latest answer entity to entities relevant to the first question.", "(5) If an entity is frequently discussed in the conversation history, it might be more likely to be a focal entity.", "We thus give such entities more 
connectivities in the graph.", "To give a concrete example of the Entity Transition Graph, let us take a look at Figure 3. When we answer Q2, Nick Carrayway and The Great Gatsby are included in the graph because the top-ranked query graph of Q1 contains the entity Nick Carrayway and returns the entity The Great Gatsby.", "As the conversation proceeds, the Entity Transition Graph grows dynamically and we eventually obtain Figure 3", "(d) when we answer Q5.", "The objective of the Conversation History Encoder is to encode the textual context of the previous questions and their predicted answers, particularly information other than the entities (which is already captured by the Entity Transition Graph).", "The output of the Conversation History Encoder is a single vector and it will be fed into the Focal Entity Predictor as an additional input.", "Similar to previous methods (Serban et al., 2017; Saha et al., 2018), we leverage a hierarchical encoder to encode the conversation history, where a lower layer encodes individual questions and predicted answers independently and an upper layer connects the sequence of questions and answers to derive a single vector.", "Specifically, suppose we have completed ( t 1) turns of the conversation.", "The lower-layer encoder employs a standard sequence encoder (in our case a BiLSTM) to encode each question and each predicted answer so far.", "Let q i R d ( 1 i ( t 1) ) denote the encoded vector representation of q i , and similarly, let a i R d denote the encoded vector for a i .", "Next, the upper-layer encoder leverages a recurrent network to encode the vector sequence q 1 , a 1 , q 2 , a 2 , . . . and generate a sequence of hidden vectors.", "The last hidden vector, which we denote as h t 1 R d , will be used as the representation of the conversation history.", "It is worth noting that although our Conversation History Encoder is similar to how previous work encodes conversation history (Serban et al., 2017), previous work uses the representation h t 1 directly as part of the representation of the current question, which introduces noise.", "In contrast, we use it to help predict our focal entity distribution only.", "The Focal Entity Predictor employs a graph convolution network (GCN) (Kipf and Welling, 2017; Schlichtkrull et al., 2018) to derive a focal score for each node in the Entity Transition Graph at each turn of the conversation.", "First, we assume that each entity (i.e., node) e in the graph has a vector representation, and this representation is updated at each turn.", "Let us use e t to represent this vector at the t -th turn.", "For each interaction relation label (i.e., forward, backward and self-loop), we also use a vector to represent it at each turn, which we denote as r t .", "At the t -th turn, the vector representations of the entities and interaction relations are updated as follows: e t = (cid:88) ( r,e (cid:48) ) N ( e ) r e (cid:48) t 1 , (1) r = softmax ( r,e (cid:48) ) N ( e ) ( h (cid:124) t 1 r t 1 ) , (2) where N ( e ) is the set of nodes connect to e together with the connecting edges, and h t 1 is the output of the Conversation History Encoder as we have explained earlier.", "The formulas above show that the representation of e will be aggregated from the representations of its neighborhood entities from the last turn of the conversation, and the aggregation weights are derived based on the conversation history h t 1 as well as the nature of the interaction relation.", "For each node that is newly added to the Entity 
Transition Graph and each of the interaction relation labels, we initialize its vector representation to a random vector.", "To derive the focal score of entity e at the current turn, we make use of both e t and two additional features.", "Specifically, we obtain the out degree of each entity from the entire KB as one additional feature.", "We also assign a label to each entity to indicate whether it is from S t (as defined in Section 3.2) or is a t .", "We denote these two features as e out-degree and e temporal , where e out-degree is a scalar and e temporal R d is represented using embeddings.", "We now concatenate e t and e temporal as well as e out-degree to derive focal scores as follows: e t = [ e t e temporal e out-degree ] , (3) FocalScore t ( e ) = softmax e G c ( w (cid:124) t e t + b t ) , (4) where denotes concatenation, both w t and b t are parameters to be learned and they are specific to the t -th turn.", "Here FocalScore t ( e ) denotes the focal score, i.e., the probability that entity e would be the focal entity for the t -th question.", "Our training objective comes from two parts: First, we want to minimize the loss from incorrectly answering a question.", "For this, we use a standard cross entropy loss.", "Second, we want to supervise the training of the Focal Entity Predictor, but we do not have any ground truth for the focal entity distributions.", "We therefore produce pseudo ground truth as follows: If there is an entity that could generate at least one query graph resulting in the correct answer, we treat it as a correct focal entity for that question and assign a value of 1 to the entry for this entity in the distribution; otherwise, the value remains 0 .", "Finally, we normalize the distribution and obtain a pseudo distribution.", "We then try to minimize the KL-divergence between this pseudo ground truth of focal entity distribution and our predicted focal entity distribution.", "In this section, we first introduce two benchmark datasets and our experiment settings in Section 4.1 and Section 4.2.", "Next, we discuss the main results and analysis in Section 4.3 and Section 4.4.", "We further show the comparison with SOTA systems in Section 4.5 and some error analysis in Section 4.6.", "We use two datasets to evaluate our proposed method.", "The latest WikiData dump 3 is used as the KB for both datasets.", "Average accuracy and F1 score are employed to measure the performance.", "3 https://query.wikidata.org ConvQuestions: This is a large-scale conversational KBQA dataset 4 created via Amazon Mechanical Turk (Christmann et al., 2019).", "The questions cover topics in five domains.", "Each conversation contains 5 sequential questions with annotated ground truth answers.", "There are many questions with missing information in the conversations, which makes the dataset very suitable for evaluating our method.", "The dataset contains 6K, 2K and 2K conversations for training, development and testing, each evenly distributed across domains.", "ConvCSQA: This dataset comes from the the CSQA dataset 5 (Saha et al., 2018), originally created for a setting similar to conversational KBQA.", "However, one of the focuses of the original CSQA data was complex questions, which is not related to our work.", "Also, the CSQA data contains many questions in a conversation that do not have connections with preceding questions.", "We therefore elaborately selected conversational questions from CSQA to suit our needs, using the following strategies: 1) We collected the topic entities 
as well as the answer entities in the conversation history.", "If a follow-up question contains one of these entities, we kept the question; otherwise, we omitted it.", "2) If the question type description did not explicitly mention that this question contains an indirect subject, we removed it.", "3) We also filtered out the conversations with a length smaller than 5.", "As a result, we obtained a subset of CSQA that consists of 7K, 0.5K and 1K conversations for training, development and testing, respectively.", "The average number of questions per conversation is 5.36.", "We call this the ConvCSQA dataset.", "To evaluate the effectiveness of our proposed Entity Transition Graph and Focal Entity Predictor, we mainly compare the following three methods:", "SingleTurn: This is the method described in Section 2.2.", "Specifically, we first recognize the named entities in the questions via the AllenNLP NER tool 6 and retrieve the corresponding entities via SPARQL.", "To generate candidate query graphs, we consider all subgraphs that are 1 hop or 2 hops away from the topic entities (or focal entities in 4 https://convex.mpi-inf.mpg.de/ 5 https://amritasaha1812.github.io/ CSQA/ 6 https://demo.allennlp.org/ named-entity-recognition the case when the SingleTurn system is used in our method).", "Next, we employ the Answer Predictor that consists of two BiLSTMs to encode the question as well as each candidate subgraph independently.", "The final score is computed via the dot product of these two vectors.", "ConvHistory: This method follows a standard way of encoding the conversational history using a two-level hierarchical encoder (Serban et al., 2017).", "It does not explicitly model any focal entity.", "Our Method: This is our proposed method where we model the focal entities through the Entity Transition Graph and the Focal Entity Predictor.", "This method also uses the same hierarchical encoder as above to encode the conversation history.", "Implementation Details: We implement our method by PyTorch on Nvidia V440.64.00-32GB GPU cards.", "We employ GloVe 7 as our initialized word embeddings and set the maximum number of GCN layers as 10 .", "We apply grid search through pre-defined hyper-parameter spaces, specifically, hidden dimensionality amongst { 200 , 300 , 400 } , learning rate amongst { 3 e 3 , 3 e 4 , 3 e 5 } and dropout ratio amongst { 0 .", "2 , 0 .", "1 , 0 .", "0 } .", "The best hyper-parameter configuration is based on the best F1 score on the development set.", "Eventually, for each neural network model, we set the hidden dimensionality to 300 .", "A dropout layer is set before each MLP with a ratio of 0.1.", "We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 3 e 5 , and the batch size is 1. 
The training epoch number is 100.", "Table 1 shows the overall results.", "As we can see, our method clearly outperforms both SingleTurn and ConvHistory on both datasets.", "This confirms that with the additional components we added that model the focal entities, the method is able to make use of the conversation history more effectively to answer the follow-up questions compared with ConvHistory (which simply encodes the entire history without specifically modeling focal entities).", "(Table 1: F1 results on development and Acc/F1 results on test of ConvQuestions and ConvCSQA. Methods, ConvQuestions Dev/Test, ConvCSQA Dev/Test: SingleTurn 29.7, 27.3/30.5, 61.8, 56.8/65.0; ConvHistory 29.1, 27.2/30.2, 62.0, 57.0/65.1; Our Method 31.9, 29.8/33.3, 63.2, 57.8/66.9. Footnote 7, GloVe: https://nlp.stanford.edu/projects/glove/ )", "Surprisingly, we find that simply modeling the conversation history through a standard two-level hierarchical sequence model does not consistently improve the performance.", "It suggests that including all the historical conversation information in a brute-force manner may not capture the most important conversation contexts effectively.", "Ablation Studies.", "Next, we remove the major components in Our Method one at a time and show the ablation results conducted on ConvQuestions in Table 2. Specifically, we 1) remove the effect of modeling conversation history by replacing r in Eqn.", "(1) with a uniform distribution; 2) remove graph information by replacing e t with h t 1 in Eqn.", "(3); 3) remove entity property by omitting e out-degree in Eqn.", "(3).", "The results in Table 2 show that all the above information helps our method to predict focal entities accurately and achieve the best KBQA results.", "Breakdown by Turns of Conversation.", "Our method is specifically designed for follow-up questions.", "Therefore, it would be interesting to see how the method fares for questions at different turns of the conversation.", "Is it more difficult to answer a question at a later turn of the conversation than an earlier question?", "We therefore show the results broken down by turns of conversation in Table 3. We observe that, as expected, for questions at later turns of a conversation, the performance drops for all three methods.", "We believe that for both ConvHistory and Our Method, this is partially due to error propagation.", "On the other hand, compared with SingleTurn and ConvHistory, Our Method is still more robust when handling the follow-up questions at later turns of a conversation.", "Case Studies.", "To verify if our predicted focal entities are meaningful, we use two concrete examples to conduct a case study.", "Figure 4 displays two example conversations from ConvQuestions.", "We show the focal entity distributions for the sequence of questions in bar charts.", "We can see that the predicted focal entity distribution indeed follows the flow of the conversation.", "For example, the entity with the largest focal score in the first conversation transits from F. Scott Fitzgerald to Zelda Fitzgerald, and then to St. Patrick's Cathedral, while in the second conversation it remains as Tupac Shakur throughout the conversation.", "We compare our proposed method with existing state-of-the-art systems in Table 4.
Our method outperforms other systems on most questions and achieves overall 9 .", "5 and 14 .", "3 percentage points of improvement on ConvQuestions and ConvCSQA, respectively.", "CONVEX, Star and Chain employ expansion-based or rule-based strategies to identify the answer entities for follow-up questions.", "HRED+KVmem combines the hierarchical encoder with a Key-Value Memory network.", "D2A and MaSP are two seq2seq models to translate the questions into logical forms.", "Our system is developed based on a standard single-turn KBQA system.", "We strengthen it by modeling focal entity transitions, and it shows outstanding capability in answering co-referenced, ellipsis and verification questions.", "To better understand where our method has failed, we randomly sampled and analysed 100 questions with wrong predictions and manually inspected them.", "We find that the errors are mainly due to the following reasons.", "Mis-prediction of Relations ( 43% ) The major errors come from relation mis-predictions.", "In our model, relation prediction is done by a simple answer predictor.", "We expect that employing a more advanced encoder could reduce this type of errors.", "Query Generation Failure ( 29% ) There are many cases where the correct query graphs are difficult to be collected from the KB due to the incompleteness of the KB or the limitation of the query generator.", "Mis-linking of Topic Entities ( 22% ) The errors caused by wrong identification of the topic entities of questions also lead to incorrectness of the final answers, because if the entity linker links the question to a wrong entity, it is unlikely to answer the question correctly.", "This is a general challenge for KBQA.", "Single-turn KBQA task has been studied for decades.", "Traditional methods tried to retrieve the correct answers from the KB via either embedding-based methods (Bordes et al., 2014; Xu et al., 2019; Sun et al., 2018, 2019; Qiu et al., 2020; He et al., 2021) or semantic parsing-based methods (Berant et al., 2013; Yih et al., 2015; Luo et al., 2018; Zhang et al., 2019; Lan and Jiang, 2020).", "Conversational KBQA is a relatively new direction that builds on top of single-turn KBQA.", "8 Since the original D2A and MaSP codes leverage the ground truth topic entities and relations to pre-train the entity linker and relation predictor but we do not, we skip the pretraining procedure in our re-implementation.", "Conversational KBQA is related to dialogue systems and conversational QA in general, which require techniques to sequentially generate responses based on the interactions with users (Ghazvinine-jad et al., 2018; Rajendran et al., 2018; Das et al., 2017).", "A conversation history can be encoded via different techniques such as a hierarchical neural network (Serban et al., 2017; Reddy et al., 2019) or modeling the flow of the conversation along with a passage (Huang et al., 2019; Gao et al., 2019, 2020).", "Our work also intends to capture the flow of the conversation but we specifically model the transitions of focal entities.", "Regarding conversational KBQA, Saha et al. (2018) proposed a model consisting of a hierarchical encoder, a key-value memory network and a decoder.", "Guo et al. (2018) and Shen et al. 
(2019) employed a seq2seq model to encode the conversation history then output a sequence of actions to form an executable command.", "Some follow-up work (Guo et al., 2019; Shen et al., 2020) focused on the meta-learning setting or the effective search strategy under weak supervision, which is beyond the focus of this paper.", "Christmann et al. (2019) detected frontier nodes by expanding a subgraph, which are potential answer entities to the current question.", "Their motivation is relevant to ours but we target at modeling the focal entities in the conversation.", "Acknowledgements This research was supported by the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant.", "Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley.", "2018.", "A knowledge-grounded neural conversation model.", "In Proceedings of the AAAI Conference on Artificial Intelligence , pages 5110 5117.", "Daya Guo, Duyu Tang, Nan Duan, Ming Zhou, and Jian Yin.", "2018.", "Dialog-to-action: Conversational question answering over a large-scale knowledge base.", "In Proceedings of the 32nd International Conference on Neural Information Processing Systems , pages 29422951.", "Hsin-Yuan Huang, Eunsol Choi, and Wen tau Yih.", "2019.", "FlowQA: Grasping flow in history for conversational machine comprehension.", "In Proceedings of International Conference on Learning Representations .", "Diederik P. Kingma and Jimmy Ba.", "2015.", "Adam: A method for stochastic optimization.", "In Proceedings of International Conference on Learning Representations .", "Thomas N. Kipf and Max Welling.", "2017.", "Semi-supervised classification with graph convolutional networks.", "In Proceedings of International Conference on Learning Representations .", "In this paper, we present a method to model the transitions of focal entities in a conversation in order to improve conversational KBQA.", "Our method can outperform two baselines and achieve state-of-the-art performance on two benchmark datasets." ]
[ "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "objective", "other", "other", "other", "abstain", "other", "other", "other", "objective", "abstain", "other", "other", "other", "objective", "other", "other", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "method", "other", "other", "abstain", "other", "other", "method", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result" ]
[ "Emotion lexicons describe the affective meaning of words and thus constitute a centerpiece for advanced sentiment and emotion analysis.", "Yet, manually curated lexicons are only available for a handful of languages, leaving most languages of the world without such a precious resource for downstream applications.", "Even worse, their coverage is often limited both in terms of the lexical units they contain and the emotional variables they feature.", "In order to break this bottleneck, we here introduce a methodology for creating almost arbitrarily large emotion lexicons for any target language.", "Our approach requires nothing but a source language emotion lexicon, a bilingual word translation model, and a target language embedding model.", "Fulfilling these requirements for 91 languages, we are able to generate representationally rich high-coverage lexicons comprising eight emotional variables with more than 100k lexical entries each.", "We evaluated the automatically generated lexicons against human judgment from 26 datasets, spanning 12 typologically diverse languages, and found that our approach produces results in line with state-of-the-art monolingual approaches to lexicon creation and even surpasses human reliability for some languages and variables.", "Code and data are available at github.com/JULIELab/MEmoLon archived under DOI 10.5281/zenodo.3779901 .", "An emotion lexicon is a lexical repository which encodes the affective meaning of individual words (lexical entries).", "Most simply, affective meaning can be encoded in terms of polarity , i.e., the distinction whether an item is considered as positive, negative, or neutral.", "This is the case for many well-known resources such as WORDNET-AFFECT (Strapparava and Valitutti, 2004), SENTIWORDNET (Baccianella et al., 2010), or VADER (Hutto and Gilbert, 2014).", "Yet, an increasing number of researchers focus on more expressive encodings for affective states inspired by distinct lines of work in psychology (Yu et al., 2016; Buechel and Hahn, 2017; Sedoc et al., 2017; Abdul-Mageed and Un-gar, 2017; Bostan and Klinger, 2018; Mohammad, 2018; Troiano et al., 2019).", "Psychologists, on the one hand, value such lexicons as a controlled set of stimuli for designing experiments, e.g., to investigate patterns of lexical access or the structure of memory (Hofmann et al., 2009; Monnier and Syssau, 2008).", "NLP researchers, on the other hand, use them to augment the emotional loading of word embeddings (Yu et al., 2017; Khosla et al., 2018), as additional input to sentence-level emotion models so that the performance of even the most sophisticated neural network gets boosted (Mohammad and Bravo-Marquez, 2017; Mohammad et al., 2018; De Bruyne et al., 2019), or rely on them in a keyword-spotting approach when no training data is available, e.g., for studies dealing with historical language stages (Buechel et al., 2016).", "As with any kind of manually curated resource, the availability of emotion lexicons is heavily restricted to only a few languages whose exact number varies depending on the variables under scrutiny.", "For example, we are aware of lexicons for 15 languages that encode the emotional variables of Valence, Arousal, and Dominance (see Section 2).", "This number leaves the majority of the world's (less-resourced) languages without such a dataset.", "In case such a lexicon exists for a particular language, it is often severely limited in size, sometimes only comprising some hundreds of entries (Davidson and Innes-Ker, 2014).", 
"Yet, even the largest lexicons typically cover only some ten thousands of words, still leaving out major portions of the emotion-carrying vocabulary.", "This is especially true for languages with complex morphology or productive compounding, such as Finnish, Turkish, Czech, or German.", "Finally, the diversity of emotion representation schemes adds another layer of complexity.", "While psychologists and NLP researchers alike find that different sets of emotional variables are complementary to each other (Stevenson et al., 2007; Pinheiro et al., 2017; Barnes et al., 2019; De Bruyne et al., 2019), manually creating emotion lexicons for every language and every emotion representation scheme is virtually impossible.", "We here propose an approach based on cross-lingual distant supervision to generate almost arbitrarily large emotion lexicons for any target language and emotional variable, provided the following requirements are met: a source language emotion lexicon covering the desired variables, a bilingual word translation model, and a target language embedding model.", "By fulfilling these preconditions, we can automatically generate emotion lexicons for 91 languages covering ratings for eight emotional variables and hundreds of thousands of lexical entries each.", "Our experiments reveal that our method is on a par with state-of-the-art monolingual approaches and compares favorably with (sometimes even outperforms) human reliability.", "Representing Emotion.", "Whereas research in NLP has focused for a very long time almost exclusively on polarity , more recently, there has been a growing interest in more informative representation structures for affective states by including different groups of emotional variables (Bostan and Klinger, 2018).", "Borrowing from distinct schools of thought in psychology, these variables can typically be subdivided into dimensional vs. discrete approaches to emotion representation (Calvo and Mac Kim, 2013).", "The dimensional approach assumes that emotional states can be composed out of several foundational factors, most noticeably Valence (corresponding to polarity), Arousal (measur-ing calmness vs. 
excitement), and Dominance (the perceived degree of control in a social situation); VAD, for short (Bradley and Lang, 1994).", "Conversely, the discrete approach assumes that emotional states can be reduced to a small, evolutionary motivated set of basic emotions (Ekman, 1992).", "Although the exact division of the set has been subject of hot debates, recently constructed datasets (see Section 4) most often cover the categories of Joy , Anger , Sadness , Fear , and Disgust ; BE5, for short.", "Plutchik's Wheel of Emotion takes a middle ground between those two positions by postulating emotional categories which are yet grouped into opposite pairs along different levels of intensity (Plutchik, 1980).", "Another dividing line between representational approaches is whether target variables are encoded in terms of (strict) class-membership or scores for numerical strength.", "In the first case, emotion analysis translates into a (multi-class) classification problem, whereas the latter turns it into a regression problem (Buechel and Hahn, 2016).", "While our proposed methodology is agnostic towards the chosen emotion format, we will focus on the VAD and BE5 formats here, using numerical ratings (see the examples in Table 1) due to the widespread availability of such data.", "Accordingly, this paper treats word emotion prediction as a regression problem.", "Building Emotion Lexicons.", "Usually, the ground truth for affective word ratings (i.e., the assignment of emotional values to a lexical item) is acquired in a questionnaire study design where subjects (annotators) receive lists of words which they rate according to different emotion variables or categories.", "Aggregating individual ratings of multiple annotators then results in the final emotion lexicon (Bradley and Lang, 1999).", "Recently, this workflow has often been enhanced by crowdsourcing (Mohammad and Turney, 2013) and best-worst scaling (Kiritchenko and Mohammad, 2016).", "As a viable alternative to manual acquisition, such lexicons can also be created by automatic means (Bestgen, 2008; Koper and Schulte im Walde, 2016; Shaikh et al., 2016), i.e., by learning to predict emotion labels for unseen words.", "Researchers have worked on this prediction problem for quite a long time.", "Early work tended to focus on word statistics, often in combination with linguistic rules (Hatzivassiloglou and McKeown, 1997; Turney and Littman, 2003).", "More recent approaches focus heavily on word embeddings, either using semi-supervised graph-based approaches (Wang et al., 2016; Hamilton et al., 2016; Sedoc et al., 2017) or fully supervised methods (Rosenthal et al., 2015; Li et al., 2017; Rothe et al., 2016; Du and Zhang, 2016).", "Most important for this work, Buechel and Hahn (2018b) report on near-human performance using a combination of FASTTEXT vectors and a multi-task feed-forward network (see Section 4).", "While this line of work can add new words, it does not extend lexicons to other emotional variables or languages.", "A relatively new way of generating novel labels is emotion representation mapping (ERM), an annotation projection that translates ratings from one emotion format into another, e.g., mapping VAD labels into BE5, or vice versa (Hoffmann et al., 2012; Buechel and Hahn, 2016, 2018a; Alarcao and Fonseca, 2017; Landowska, 2018; Zhou et al., 2020; Park et al., 2019).", "While our work uses ERM to add additional emotion variables to the source lexicon, ERM alone can neither increase the coverage of a lexicon, nor adapt it to 
another language.", "Translating Emotions.", "The approach we propose is strongly tied to the observation by Lev-eau et al. (2012) and Warriner et al. (2013) who foundcomparing a large number of existing emotion lexicons of different languagesthat translational equivalents of words show strong stability and adherence to their emotional value.", "Yet, their work is purely descriptive.", "They do not exploit their observation to create new ratings, and only consider manual rather than automatic translation.", "Making indirect use of this observation, Mohammad and Turney (2013) offer machine-translated versions of their NRC Emotion Lexicon .", "Also, many approaches in cross-lingual sentiment analysis (on the sentence-level) rely on translating polarity lexicons (Abdalla and Hirst, 2017; Barnes et al., 2018).", "Perhaps most similar to our work, Chen and Skiena (2014) create (polarity-only) lexicons for 136 languages by building a multilingual word graph and propagating sentiment labels through that graph.", "Yet, their method is restricted to high frequency wordstheir lexicons cover between 12 and 4,653 entries, whereas our approach exceeds this limit by more than two orders of magnitude.", "Our methodology also resembles previous work which models word emotion for historical language stages (Cook and Stevenson, 2010; Hamilton et al., 2016; Hellrich et al., 2018; Li et al., 2019).", "Work in this direction typically comes up with a set of seed words with assumingly temporally stable affective meaning (our work assumes stability against translation) and then uses distributional methods to derive emotion ratings in the target language stage.", "However, gold data for the target language (stage) is usually inaccessible, often preventing evaluation against human judgment.", "In contrast, we here propose several alternative evaluation set-ups as an integral part of our methodology.", "Our methodology integrates (1) cross-lingual generation and expansion of emotion lexicons and (2) their evaluation against gold and silver standard data.", "Consequently, a key aspect of our workflow design is how data is split into train, dev, and test sets at different points of the generation process.", "Figure 1 gives an overview of our framework including a toy example for illustration.", "Lexicon Generation.", "We start with a lexicon ( Source ) of arbitrary size, emotion format 1 and source language which is partitioned into train, dev, and test splits denoted by Source-train , Source-dev , and Source-test , respectively.", "Next, we leverage a bilingual word translation model between source and desired target language to build the first target-side emotion lexicon denoted as TargetMT .", "Source words are translated according to the model, whereas target-side emotion labels are simply copied from the source to the target (see Section 2).", "Entries are assigned to train, dev, or test set according to their source-side assignment (cf. 
Figure 1).", "The choice of our translation service (see below) ensures that each source word receives exactly one translation.", "TargetMT is then used as the distant supervisor to train a model that predicts word emotions based on target-side word embeddings.", "TargetMT-train and TargetMT-dev are used to fit model parameters and optimize hyperparameters, respectively, whereas TargetMT-test is held out for later evaluation.", "Once finalized, the model is used to predict new labels for the words in TargetMT , resulting in a second target-side emotion lexicon denoted TargetPred .", "Our rationale for doing so is that a reasonably trained model should generalize well 1 This encompasses not only VA(D) and BE5, but also any sort of (real-valued) polarity encodings.", "over the entire TargetMT lexicon because it has access to the target-side embedding vectors.", "Hence, it may mitigate some of the errors which were introduced in previous steps, either by machine translation or by assuming that source-and target-side emotion are always identical.", "We validate this assumption in Section 6.", "We also predict ratings for all the words in the embedding model, leading to a large number of new entries.", "The splits are defined as follows: let MT train , MT dev , and MT test denote the set of words in train, dev, and test split of TargetMT , respectively.", "Likewise, let P train , P dev , and P test denote the splits of TargetPred and let E denote the set of words in the embedding model.", "Then P train := MT train P dev := MT dev \\ MT train P test := ( MT test E ) \\ ( MT dev MT train ) The above definitions help clarify the way we address polysemy.", "2 In short, our work evades this problem by dealing with lexical entries exclusively on the typerather than the senselevel.", "From a lexicological perspective, this may seem like a strong assumption.", "From a modeling perspective, however, it appears almost obvious as it aligns well with the major components of our methodology, i.e., lexicons, embeddings, and translation.", "The lexicons we work with follow the design of behavioral experiments: a stimulus (word type) is given to may result in multiple source entries translating to the same target-side word.", "3 This circumstance leads to partial duplicates in TargetMT , i.e., groups of entries with the same word type but different emotion values (because they were derived from distinct Source entries).", "Such overlap could do harm to the integrity of our evaluation since knowledge may leak from training to validation phase, i.e., by testing the model on words it has already seen during training, although with distinct emotion labels.", "The proposed data partitioning eliminates such distortion effects.", "Since partial duplicates receive the same embedding vector, the prediction model assigns the same emotion value to both, thus merging them in TargetPred .", "of the above generation method is that it allows us to create large-scale emotion lexicons for languages", "a subject and the response (rating) is recorded.", "The absence of sense-level annotation simplifies the mapping between lexicon and embedding entries.", "While sense embeddings form an active area of research (Camacho-Collados and Pilehvar, 2018; Chi and Chen, 2018), to the best of our knowledge, type-level embeddings yield state-of-the-art performance in downstream applications.", "3 Source-side polysemy, in contrast to its target-side counterpart, is less of a problem, because we receive only a single candidate during 
translation.", "This may result in cases where the translation misaligns with the copied emotion value in TargetMT .", "Yet, the prediction step partly mitigates such inconsistencies (see Section 6).", "for which gold data is lacking.", "But if that is the case, how can we assess the quality of the generated lexicons?", "Our solution is to propose two different evaluation scenariosa gold evaluation which is a strict comparison against human judgment, meaning that it is limited to languages where such data (denoted TargetGold ) is available, and a silver evaluation which substitutes human judgments by automatically derived ones (silver standard) which is feasible for any language in our study.", "The rationale is that if both, gold and silver evaluation, strongly agree with each other, we can use one as proxy for the other when no target-side gold data exists (examined in Section 6).", "Note that our lexicon generation approach consists of two major steps, translation and prediction .", "However, these two steps are not equally important for each generated entry in TargetPred .", "Words, such as German Sonnenschein for which a translational equivalent already exists in the Source (sunshine; see Figure 1), mainly rely on translation, while the prediction step acts as an optional refinement procedure.", "In contrast, the prediction step is crucial for words, such as Erdbeben , whose translational equivalents (earthquake) are missing in the Source .", "Yet, these words also depend on the translation step for producing training data.", "These considerations are important for deciding which words to evaluate on.", "We may choose to base our evaluation on the full TargetPred lexicon, including words from the training setafter all, the word emotion model does not have access to any target-side gold data.", "The problem with this approach is that it merges words that mainly rely on translation , because their equivalents are in the Source , and those which largely depend on prediction , because they are taken from the embedding model.", "In this case, generalizability of evaluation results becomes questionable.", "Thus, our evaluation methodology needs to ful-fill the following two requirements: (1) evaluation must not be performed on translational equivalents of the Source entries to which the model already had access during training (e.g., Sonnenschein and nuklear in our example from Figure 1); but, on the other hand, (2) a reasonable number of instances must be available for evaluation (ideally, as many as possible to increase reliability).", "The intricate cross-lingual train-dev-test set assignment of our generation methodology is in place so that we meet these two requirements.", "In particular, for our silver evaluation, we intersect TargetMT-test with TargetPred-test and compute the correlation of these two sets individually for each emotion variable.", "Pearson's r will be used as correlation measure throughout this paper.", "Establishing a test set at the very start of our workflow, Source-test , assures that there is a relatively large overlap between the two sets and, by extension, that our requirements for the evaluation are met.", "The gold evaluation is a somewhat more challenging case, because we can, in general, not guarantee that the overlap of a TargetGold lexicon with TargetPred-test will be of any particular size.", "For this reason, the words of the embedding model are added to TargetPred-test (see above), maximizing the expected overlap with TargetGold .", "In practical 
terms, we intersect TargetGold with TargetPred-test and compute the variable-wise correlation between these sets, in parallel to the silver evaluation.", "A complementary strategy for maximizing overlap, by exploiting dependencies between published lexicons, is described below.", "Gold Lexicons and Data Splits.", "We use the English emotion lexicon from Warriner et al. (2013) as first part of our Source dataset.", "This popular resource comprises about 14k entries in VAD format collected via crowdsourcing.", "Since manually gathered BE5 ratings are available only for a subset of this lexicon (Stevenson et al., 2007), we add BE5 ratings from Buechel and Hahn (2018a) who used emotion representation mapping (see Section 2) to convert the existing VAD ratings, showing that this is about as reliable as human annotation.", "As apparent from the previous section, a crucial aspect for applying our methodology is the design of the train-dev-test split of the Source because it directly impacts the amount of words we can test our lexicons on during gold evaluation.", "In line with these considerations, we choose the lexical items which are already present in ANEW (Bradley and Lang, 1999) as Source-test set.", "ANEW is the precursor to the version later distributed by Warriner et al. (2013); it is widely used and has been adapted to a wide range of languages.", "With this choice, it is likely that a resulting TargetPred-test set has a large overlap with the respective TargetGold lexicon.", "As for the TargetGold lexicons, we included every VA(D) and BE5 lexicon we could get hold of with more than 500 entries.", "This resulted in 26 datasets covering 12 quite diverse languages (see Table 2).", "Note that we also include English lexicons in the gold evaluation.", "In these cases, no translation will be carried out ( Source is identical to TargetMT ) so that only the expansion step is validated.", "Appendix A.1 gives further details on data preparation.", "Translation.", "We used the GOOGLECLOUDTRANSLATIONAPI 4 to produce word-to-word translation tables.", "This is a commercial service, total translation costs amount to 160 EUR.", "API calls were performed in November 2019.", "Embeddings.", "We use the fastText embedding models from Grave et al. (2018) trained for 157 languages on the respective WIKIPEDIA and the respective part of COMMONCRAWL .", "These resources not only greatly facilitate our work but also increase comparability across languages.", "The restriction to only 91 languages comes from intersecting the ones covered by the vectors with the languages covered by the translation service.", "4 https://cloud.google.com/translate/ Models.", "Since our proposed methodology is agnostic towards the chosen word emotion model, we will re-use models from the literature.", "In particular, we will rely on the multi-task learning feed-forward network (MTLFFN) worked out by Buechel and Hahn (2018b).", "This network constitutes the current state of the art for monolingual emotion lexicon creation (expanding an existing lexicon for a given language) for many of the datasets in Table", "2. 
The MTLFFN has two hidden layers of 256 and 128 units, respectively, and takes pre-trained embedding vectors as input.", "Its distinguishing feature is that hidden layer parameters are shared between the different emotion target variables, thus constituting a mild form of multi-task learning (MTL).", "We apply MTL to VAD and BE5 variables individually (but not between both groups), thus training two distinct emotion models per language, following the outcome of a development experiment.", "Details are given in Appendix A.2 together with the remainder of the model specifications.", "Being aware of the infamous instability of neural approaches (Reimers and Gurevych, 2017), we also employ a ridge regression model, an L2-regularized version of linear regression, as a more robust, yet also powerful baseline (Li et al., 2017).", "The size of the resulting lexicons (a complete list is provided in Table 8 in the Appendix) ranges from roughly 100k to more than 2M entries, mainly depending on the vocabulary of the respective embeddings.", "We want to point out that not every single entry should be considered meaningful because of noise in the embedding vocabulary caused by typos and tokenization errors.", "However, choosing the best size for an emotion lexicon necessarily translates into a quality-coverage trade-off for which there is no general solution.", "Instead, we release the full-size lexicons and leave it to prospective users to apply any sort of filtering they deem appropriate.", "Silver Evaluation.", "Figure 2 displays the results of our silver evaluation.", "(Figure 2: Silver evaluation results in Pearson's r.)", "Languages (x-axis) are sorted by their average performance over all variables (not shown in the plot; tabular data given in the Appendix).", "As can be seen, the evaluation results for English are markedly better than for any other language.", "This is not surprising since no (potentially error-prone) machine translation was performed.", "Apart from that, performance remains relatively stable across most of the languages and starts degrading more quickly only for the last third of them.", "In particular, for Valence, typically the easiest variable to predict, we achieve a strong performance of r > .7 for 56 languages.", "On the other hand, for Arousal, typically the most difficult one to predict, we achieve a solid performance of r > .5 for 55 languages.", "Dominance and the discrete emotion variables show performance trajectories swinging between these two extremes.", "We assume that the main factors for explaining performance differences between languages are the quality of the translation and embedding models which, in turn, both depend on the amount of available text data (parallel or monolingual, respectively).", "Comparing the MTLFFN and the ridge baseline, we find that the neural network reliably outperforms the linear model.", "On average over all languages and variables, the MTL models achieve 6.7 percentage points higher Pearson correlation.", "Conversely, ridge regression outperforms the MTLFFN in only 15 of the total 728 cases (91 languages × 8 variables).", "Gold Evaluation.", "Results for VAD variables on gold data are given in Table 3.",
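The sentences above describe the MTLFFN word-emotion regressor: two hidden layers of 256 and 128 units shared across all emotion variables, fed with pre-trained embedding vectors and trained in a mild multi-task fashion. The sketch below illustrates that kind of architecture; it is an assumption-laden approximation rather than the implementation of Buechel and Hahn (2018b), and the activation functions, dropout, and optimizer details of their Appendix A.2 are not reproduced here.

```python
# Illustrative sketch (not the authors' implementation) of a multi-task
# feed-forward word-emotion regressor: two hidden layers shared across all
# emotion variables, plus one linear regression head per variable.
# Layer sizes follow the text (256, 128); ReLU and MSE are assumptions.
import torch
import torch.nn as nn

class MultiTaskFFN(nn.Module):
    def __init__(self, emb_dim=300, variables=("valence", "arousal", "dominance")):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(emb_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        # One scalar output head per emotion variable (e.g., VAD or BE5).
        self.heads = nn.ModuleDict({v: nn.Linear(128, 1) for v in variables})

    def forward(self, word_vectors):
        h = self.shared(word_vectors)
        return {v: head(h).squeeze(-1) for v, head in self.heads.items()}

model = MultiTaskFFN()
loss_fn = nn.MSELoss()
vectors = torch.randn(32, 300)                        # a batch of word vectors
targets = {v: torch.randn(32) for v in model.heads}   # per-variable gold ratings
predictions = model(vectors)
loss = sum(loss_fn(predictions[v], targets[v]) for v in model.heads)
loss.backward()
```

Sharing the hidden layers is what provides the mild multi-task regularization mentioned above, while each rating dimension keeps its own cheap linear head; following the text, one separate model of this shape would be trained for the VAD group and one for the BE5 group per language.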
"As can be seen, our lexicons show a good correlation with human judgment and do so robustly, even for less-resourced languages, such as Indonesian (id), Turkish (tr), or Croatian (hr), and across affective variables.", "Perhaps the strongest negative outliers are the Arousal results for the two Chinese datasets (zh), which are likely to result from the low reliability of the gold ratings (see below).", "We compare these results against those from Buechel and Hahn (2018b) which were acquired on the respective TargetGold dataset in a monolingual fashion using 10-fold cross-validation (10-CV).", "(Table 4: Gold evaluation results for BE5 (Joy, Anger, Sadness, Fear, Disgust) in Pearson's r. Columns: ID, Shared, %, Joy, Ang, Sad, Fea, Dis. Rows: en3 1033 99 .89 .83 .80 .82 .78; es4 363 41 .86 .84 .84 .84 .76; es5 6096 58 .64 .72 .72 .72 .63; es6 992 43 .80 .74 .71 .72 .68; de4 848 43 .80 .66 .52 .68 .42; pl3 1381 47 .78 .71 .66 .69 .71; tr2 721 35 .77 .69 .71 .70 .65; Mean .79 .74 .71 .74 .66.)", "We admit that those results are not fully comparable to those presented here because we use fixed splits rather than 10-CV.", "Nevertheless, we find that the results of our cross-lingual set-up are more than competitive, outperforming the monolingual results from Buechel and Hahn (2018b) in 17 out of 30 cases (mainly for Valence and Dominance, less often for Arousal).", "This is surprising since we use an otherwise identical model and training procedure.", "We conjecture that the large size of the English Source lexicon, compared to most TargetGold lexicons, more than compensates for error-prone machine translation.", "Table 4 shows the results for BE5 datasets which are in line with the VAD results.", "Regarding the ordering of the emotional variables, again, we find Valence to be the easiest one to predict, Arousal the hardest, whereas basic emotions and Dominance take a middle ground.", "Comparison against Human Reliability.", "We base this analysis on inter-study reliability (ISR), a rather strong criterion for human performance.", "ISR is computed, per variable, as the correlation between the ratings from two distinct annotation studies (Warriner et al., 2013).", "Hence, this analysis is restricted to languages where more than one gold lexicon exists per emotion format.", "We intersect the entries from both gold standards as well as the respective TargetPred-test set and compute the correlation between all three pairs of lexicons.", "If our lexicon agrees more with one of the gold standards than the two gold standards agree with each other, we consider this as an indicator for superhuman reliability (Buechel and Hahn, 2018b).", "As shown in Table 5, our lexicons are often competitive with human reliability for Valence (especially for English and Chinese), but outperform human reliability in 4 out of 6 cases for Arousal, and in the single test case for Dominance.", "(Table 5: Comparison against human performance. Columns: Gold1, Gold2, Shared, Emo, G1 vs G2, G1 vs Pr, G2 vs Pr. Rows: en1 en2 1032: V .953 .941 .922, A .760 .761 .711, D .794 .879 .782; es1 es2 610: V .976 .905 .912, A .758 .714 .725; es2 es3 222: V .976 .906 .907, A .710 .724 .691; de2 de3 498: V .963 .806 .812, A .760 .721 .663; pl1 pl2 445: V .943 .838 .852, A .725 .764 .643; zh1 zh2 140: V .932 .918 .898, A .482 .556 .455.)", "There are no cases of overlapping gold standards for BE5.", "This section investigates patterns in prediction quality across languages, validating design decisions of our methodology.", "Translation vs. 
Prediction.", "Is it beneficial to predict new ratings for the words in TargetMT rather than using them as final lexicon entries straight away?", "For each TargetGold lexicon (cf. Table 2), we intersect its word material with that in TargetMT and TargetPred .", "Then, we compute the correlation between TargetPred and TargetMT with the gold standard.", "This analysis was done on the respective train sets because using TargetMT rather than TargetPred is only an option for entries known at training time.", "Table 6 depicts the results of this comparison averaged over all gold lexicons.", "As hypothesized, the TargetPred lexicons agree, on average, more with human judgment than the TargetMT lexicons, suggesting that the word emotion model acts as a value-adding post-processor, partly mitigating rating inconsistencies introduced by mere translation of the lexicons.", "The observation holds for each individual emotion variable with particularly large benefits for Arousal, where the post-processed TargetPred lexicons are on average V a l A r o D o m J o y A n g S a d F e a D i s Pred .871 .652 .733 .767 .734 .692 .728 .650 MT .796 .515 .613 .699 .677 .636 .654 .579 Diff .076 .137 .119 .068 .057 .056 .074 .071 Table 6: Quality of TargetMT vs. TargetPred in terms of average Pearson correlation over all languages and gold standards.", "14%-points better compared to the translation-only TargetMT lexicons.", "This seems to indicate that lexical Arousal is less consistent between translational equivalents compared to other emotional meaning components like Valence and Sadness, which appear to be more robust against translation.", "Gold vs. Silver Evaluation.", "How meaningful is silver evaluation without gold data?", "We compute the Pearson correlation between gold and silver evaluation results across languages per emotion variable.", "For languages where we consider multiple datasets during gold evaluation, we first average the gold evaluation results for each emotion variable.", "As can be seen from Table 7, the correlation values range between r = .", "91 for Joy and r = .", "27 for Disgust.", "This relatively large disper-sion is not surprising when we take into account that we correlate very small data series (for Valence and Arousal there are just 12 languages for which both gold and silver evaluation results are available; for BE5 there are only 5 such languages).", "However, the mean over all correlation values in Table 7 is .", "64 , indicating that there is a relatively strong correlation between both types of evaluation.", "This suggests that the silver evaluation may be used as a rather reliable proxy of lexicon quality even in the absence of language-specific gold data.", "Emotion lexicons are at the core of sentiment analysis, a rapidly flourishing field of NLP.", "Yet, despite large community efforts, the coverage of existing lexicons is still limited in terms of languages, size, and types of emotion variables.", "While there are techniques to tackle these three forms of sparsity in isolation, we introduced a methodology which allows us to cope with them simultaneously by jointly combining emotion representation mapping, machine translation, and embedding-based lexicon expansion.", "Our study is large-scale in many respects.", "We created representationally complex lexicons comprising 8 distinct emotion variablesfor 91 languages with up to 2 million entries each.", "The evaluation of the generated lexicons featured 26 manually annotated datasets spanning 12 diverse languages.", "The 
predicted ratings showed consistently high correlation with human judgment, compared favorably with state-of-the-art monolingual approaches to lexicon expansion and even surpassed human inter-study reliability in some cases.", "The sheer number of test sets we used allowed us to validate fundamental methodological assumptions underlying our approach.", "Firstly, the evaluation procedure, which is integrated into the generation methodology, allows us to reliably estimate the quality of resulting lexicons, even without target language gold standard .", "Secondly, our data suggests that embedding-based word emotion models can be used as a repair mechanism , mitigating poor target-language emotion estimates acquired by simple word-to-word translation.", "Future work will have to deepen the way we deal with word sense ambiguity by way of exchanging the simplifying type-level approach our current work is based on with a semantically more informed sense-level approach.", "A promising direction would be to combine a multilingual sense inventory such as BABELNET (Navigli and Ponzetto, 2012) with sense embeddings (Camacho-Collados and Pilehvar, 2018).", "We would like to thank the anonymous reviewers for their helpful suggestions and comments, and Tinghui Duan, JULIELAB , for assisting us with the Chinese gold data.", "This work was partially funded by the German Federal Ministry for Economic Affairs and Energy (funding line Big Data in der makrookonomischen Analyse [Big data in macroeconomic analysis]; Fachlos 2; GZ 23305/003#002)." ]
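Since the sentence sequence above walks through the full generation and evaluation workflow (translate the Source lexicon word by word, copy its emotion labels to obtain TargetMT, train an embedding-based regressor on it, expand to the whole embedding vocabulary to obtain TargetPred, and score lexicons by per-variable Pearson correlation on shared words), a compact sketch may help make the data flow concrete. All helper names below are hypothetical, lexicons are assumed to be dicts mapping a word to per-variable ratings, and the train/dev/test bookkeeping described earlier is omitted for brevity.

```python
# Hedged, high-level sketch of the cross-lingual lexicon-generation workflow
# described above.  The callables passed in (translate, train_regressor) and
# the dict-based lexicon format are assumptions for illustration only.
import numpy as np

def generate_target_lexicon(source_lexicon, translate, embeddings, train_regressor):
    # 1) Word-to-word translation; emotion labels are copied unchanged (TargetMT).
    target_mt = {translate(word): labels for word, labels in source_lexicon.items()}

    # 2) Distant supervision: fit a word-emotion regressor on embedding vectors.
    train_words = [w for w in target_mt if w in embeddings]
    model = train_regressor([embeddings[w] for w in train_words],
                            [target_mt[w] for w in train_words])

    # 3) Predict ratings for every word in the embedding vocabulary (TargetPred);
    #    model.predict is assumed to return a dict of per-variable scores.
    target_pred = {w: model.predict(embeddings[w]) for w in embeddings}
    return target_mt, target_pred

def correlate(lexicon_a, lexicon_b, variable):
    """Pearson's r between two lexicons on their shared words, for one variable
    (the operation behind both the silver and the gold evaluation)."""
    shared = sorted(set(lexicon_a) & set(lexicon_b))
    a = np.array([lexicon_a[w][variable] for w in shared], dtype=float)
    b = np.array([lexicon_b[w][variable] for w in shared], dtype=float)
    return float(np.corrcoef(a, b)[0, 1])
```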
[ "abstain", "abstain", "abstain", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "other", "abstain", "other", "other", "other", "other", "method", "abstain", "abstain", "method", "other", "objective", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "result", "method", "abstain", "abstain", "other", "other" ]
[ "Abstract Opinion summarization is the task of automatically creating summaries that reflect subjective information expressed in multiple documents, such as product reviews.", "While the majority of previous work has focused on the extractive setting, i.e., selecting fragments from input reviews to produce a summary, we let the model generate novel sentences and hence produce abstractive summaries.", "Recent progress in summarization has seen the development of supervised models which rely on large quantities of document-summary pairs.", "Since such training data is expensive to acquire, we instead consider the unsupervised setting, in other words, we do not use any summaries in training.", "We define a generative model for a review collection which capitalizes on the intuition that when generating a new review given a set of other reviews of a product, we should be able to control the amount of novelty going into the new review or, equivalently, vary the extent to which it deviates from the input.", "At test time, when generating summaries, we force the novelty to be minimal, and produce a text reflecting consensus opinions.", "We capture this intuition by defining a hierarchical variational autoencoder model.", "Both individual reviews and the products they correspond to are associated with stochastic latent codes, and the review generator (decoder) has direct access to the text of input reviews through the pointer-generator mechanism.", "Experiments on Amazon and Yelp datasets, show that setting at test time the review's latent code to its mean, allows the model to produce fluent and coherent summaries reflecting common opinions.", "Summarization of user opinions expressed in online resources, such as blogs, reviews, social media, or internet forums, has drawn much attention due to its potential for various information access applications, such as creating digests, search, and report", "generation (Hu and Liu, 2004; Angelidis and Lapata, 2018; Medhat et al., 2014).", "Although there has been significant progress recently in summarizing non-subjective context (Rush et al., 2015; Nallapati et al., 2016; Paulus et al., 2017; See et al., 2017; Liu et al., 2018), modern deep learning methods rely on large amounts of annotated data that are not readily available in the opinion-summarization domain and expensive to produce.", "Moreover, annotation efforts would have to be undertaken for multiple domains as online reviews are inherently multi-domain (Blitzer et al., 2007) and summarization systems highly domain-sensitive (Isonuma et al., 2017).", "Thus, perhaps unsurprisingly, there is a long history of applying unsupervised and weakly-supervised methods to opinion summarization (e.g., Mei et al. 
2007; Titov and McDonald 2008; Angelidis and Lapata 2018), however, these approaches have primarily focused on extractive summarization, i.e., producing summaries by copying parts of the input reviews.", "In this work, we instead consider abstractive summarization which involves generating new phrases, possibly rephrasing or using words that were not in the original text.", "Abstractive summaries are often preferable to extractive ones as they can synthesize content across documents avoiding redundancy (Barzilay et al., 1999; Carenini and Cheung, 2008; Di Fabbrizio et al., 2014).", "In addition, we focus on the unsupervised setting and do not use any summaries for training.", "Unlike aspect-based summarization (Liu, 2012), which rewards the diversity of opinions, we aim to generate summaries that represent consensus (i.e., dominant opinons in reviews).", "We argue that such summaries can be useful for quick decision making, and to get an overall feel for a product or business (see the example in Table 1).", "More specifically, we assume we are provided with a large collection of reviews for various products and businesses and define a generative model of this collection.", "Intuitively, we want to design such a model that, when generating a review for a product 1 relying on a set of other reviews, we can control the amount of novelty going into the new review or, equivalently, vary the extent to which it deviates from the input.", "At test time, we can force the novelty to be minimal, and generate summaries representing consensus opinions.", "We capture this intuition by defining a hierarchical variational autoencoder (VAE) model.", "Both products and individual reviews are associated with latent representations.", "Product representations can store, for example, overall sentiment, common top-ics, and opinions expressed about the product.", "In contrast, latent representations of reviews depend on the product representations and capture the content of individual reviews.", "While at training time the latent representations are random variables, we fix them to their respective means at test time.", "As desired for summarization, these average' (or copycat') reviews differ in writing style from a typical review.", "For example, they do not contain irrelevant details that are common in customer reviews, such as mentioning the occasion or saying how many family members accompanied the reviewer.", "In order to encourage the summaries to include spe-1 For simplicity, we refer to both products (e.g., iPhone X) and businesses (e.g., a specific Starbucks branch) as products .", "cific details, the review generator (decoder') has direct access to the text of input reviews through the pointer-generator mechanism (See et al., 2017).", "In the example in Table 1, the model included specific information about the restaurant type and its location in the generated summary.", "As we will see in ablation experiments, without this conditioning, model performance drops substantially, as the summaries become more generic.", "We evaluate our approach on two datasets, Amazon product reviews and Yelp reviews of businesses.", "The only previous method dealing with unsupervised multi-document opinion summarization, as far as we are aware of, is MeanSum (Chu and Liu, 2019).", "Similarly to our work, they generate consensus summaries and consider the Yelp benchmark.", "Whereas we rely on continuous latent representations, they treat the summary itself as a discrete latent representation of a product.", "Although this 
captures the intuition that a summary should relay key information about a product, using discrete latent sequences makes optimization challenging; (Miao and Blunsom, 2016; Baziotis et al., 2019; Chu and Liu, 2019) all have to use an extra training loss term and biased gradient estimators.", "Our contributions can be summarized as follows: we introduce a simple end-to-end approach to unsupervised abstractive summarization; we demonstrate that the approach substantially outperforms the previous method, both when measured with automatic metrics and in human evaluation; we provide a dataset of abstractive summaries for Amazon products.", "As discussed above, we approach the summarization task from a generative modeling perspective.", "We start with a high level description of our model, then, in Sections 2.2 and 2.3, we describe how we estimate the model and provide extra technical details.", "In Section 3, we explain how we use the model to generate summaries.", "Our text collection consists of groups of reviews, with each group corresponding to a single product.", "Conditional independence of the reviews given the group representation c .", "(b) The r i 's decoder accesses other reviews of the group ( r 1 , ..., r i 1 , r i +1 , ..., r N ).", "Our latent summarization model (which we call COPYCAT ) captures this hierarchical organization and can be regarded as an extension of the vanilla text-VAE model (Bowman et al., 2016).", "COPYCAT uses two sets of latent variables as shown in Figure 1a.", "Namely, we associate each review group (equivalently, each product) with a continuous variable c , which captures the group's latent seman-tics'.", "In addition, we associate each individual review ( r i ) with a continuous variable z i , encoding the semantics of that review.", "The information stored in z i is used by the decoder p ( r i | z i ) to produce review text r i .", "The marginal log-likelihood of one group of reviews r 1: N = ( r 1 , . . . , r N ) is given by log p ( r 1: N ) = log (cid:90) (cid:34) p ( c ) N (cid:89) i =1 (cid:20)(cid:90) p ( r i | z i ) p ( z i | c )d z i (cid:21) d c (cid:35) , where we marginalize over variables c and z 1: N .", "When generating a new review r i , given the set of previous reviews r 1: i , the information about these reviews has to be conveyed through the latent representations c and z i .", "This bottleneck is undesirable, as it will make it hard for the model to pass fine-grain information.", "For example, at generation time, the model should be reusing named entities (e.g., product names or technical characteristics) from other reviews rather than hallucinating' or avoiding generating them at all, resulting in generic and non-informative text.", "We alleviate this issue by letting the decoder directly access other reviews.", "We can formulate this as an autoregressive model: p ( r 1: N | c ) = N (cid:89) i =1 p ( r i | r 1 , ..., r i 1 , c ) .", "As we discuss in Section 2.3, the conditioning is instantiated using the pointer-generator mechanism (See et al., 2017) and, thus, will specifically help in generating rare words (e.g., named entities).", "We want our summarizer to equally rely on every review, without imposing any order (e.g., temporal) on the generation process.", "Instead, as shown in Figure 1b, when generating r i , we let the decoder access all other reviews within a group, r i = ( r 1 , . . . , r i 1 , r i +1 , . . . 
, r N ) .", "This is closely related to pseudolikelihood estimation (Besag, 1975) or Skip-Thought's objective (Kiros et al., 2015).", "The final objective that we maximize for each group of reviews r 1: N : log (cid:90) p ( c ) N (cid:89) i =1 (cid:20)(cid:90) p ( r i | z i , r i ) p ( z i | c )d z i (cid:21) d c (2) We will confirm in ablation experiments that both hierarchical modeling (i.e., using c ) and the direct conditioning on other reviews are beneficial.", "As standard with VAEs and variational inference in general (Kingma and Welling, 2013), instead of directly maximizing the intractable marginal likelihood in Equation 2, we maximize its lower bound: 3", "L ( , ; r 1: N ) = E c q ( c | r 1: N ) (cid:34) N (cid:88) i =1 E z i q ( z i | r i ,c ) [log p ( r i | z i , r i )] N (cid:88) i =1 DKL [ q ( z i | r i , c ) || p ( z i | c )] (cid:35) DKL [ q ( c | r 1: N ) || p ( c )] .", "3 See the derivations in Appendix A.1.", "The lower bound includes two inference networks', q ( c | r 1: N ) and q ( z i | r i , c ) , which are neural networks parameterized with and will be discussed in detail in Section 2.3.", "They approximate the corresponding posterior distributions of the model.", "The first term is the reconstruction error: it encourages the quality reconstruction of the reviews.", "The other two terms are regularizers.", "They control the amount of information encoded in the latent representation by penalizing the deviation of the estimated posteriors from the corresponding priors, the deviation is measured in terms of the Kullback-Leibler (KL) divergence.", "The bound is maximized with respect to both the generative model's parameters and inference networks' parameters .", "Due to Gaussian assumptions, the Kullback-Leibler (KL) divergence terms are available in closed form, while we rely on the reparam-eterization trick (Kingma and Welling, 2013) to compute gradients of the reconstruction term.", "The inference network predicting the posterior for a review-specific variable q ( z i | r i , c ) is needed only in training and is discarded afterwards.", "In contrast, we will exploit the inference network q ( c | r 1: N ) when generating summaries, as discussed in Section 3.", "A GRU encoder (Cho et al., 2014) embeds review words w to obtain hidden states h .", "Those representations are reused across the system, e.g., in the inference networks and the decoder.", "The full architecture used to produce the latent codes c and z i is shown in Figure 2.", "We make Gaussian assumptions for all distributions (i.e. 
posteriors and priors).", "As in Kingma and Welling (2013), we use separate linear projections (LPs) to compute the means and diagonal log-covariances.", "2.3.2 Prior p ( c ) and posterior q ( c | r 1: N ) We set the prior over group latent codes to the standard normal distribution, p ( c ) = N ( c ; 0 , I ) .", "In order to compute the approximate posterior q ( c | r 1: N ) , we first predict the contribution (im-portance') of each word in each review ti to the code of the group: ti = exp( f ( m ti )) (cid:80) Nj =1 (cid:80) T j k exp( f ( m kj )) , where T i is the length of r i and f is a feed-forward neural network (FFNN) 4 which takes as input concatenated word embeddings and hidden states of the GRU encoder, m ti = [ h ti w ti ] , and returns a scalar.", "Next, we compute the intermediate representation with the weighted sum: h = (cid:80) Ni =1 (cid:80) T i t ti m ti .", "Finally, we compute the Gaussian's parameters using the affine projections: ( r 1: N ) = L h + b L log ( r 1: N ) = G h + b G 2.3.3 Prior p ( z i | c ) and posterior q ( z i | r i ,c ) To compute the prior on the review code z i , p ( z i | c ) = N ( z i ; ( c ) , I ( c )) , we linearly project the product code c .", "Similarly, to compute the parameters of the approximate posterior q ( z | r i , c ) = N ( z ; ( r i , c ) , I ( r i , c )) , we concatenate the last encoder's state h T i i of the review r i and c , and perform affine transformations.", "| To compute the distribution p ( r i | z i , r i ) , we use an auto-regressive GRU decoder with the attention mechanism (Bahdanau et al., 2015) and a pointer-generator network.", "We compute the context vector c ti = att ( s ti , h i ) by attending to all the encoder's hidden states h i of the other reviews r i of the group, where the decoder's hidden state s ti is used as a query.", "The 4 We use FFNNs with the tanh non-linearity in several model components.", "hidden state of the decoder is computed using the GRU cell as s ti = GRU ( s t 1 i , [ w ti c t 1 i z i ]) .", "The pointer-generator network computes two internal word distributions that are hierarchically aggregated into one distribution (Morin and Bengio, 2005).", "One distribution assigns probabilities to words being generated using a fixed vocabulary, and another one probabilities to be copied directly from the other reviews r i .", "In our case, the network helps to preserve details and, especially, to generate rare tokens.", "Given reviews r 1: N , we generate a summary that reflects common information using trained components of the model.", "Formally, we could sample a new review from p ( r | r 1: N ) = E c q ( c | r 1: N ) (cid:20) E z p ( z | c ) [ p ( r | z, r 1: N )] (cid:21) .", "As we argued in the introduction and will revisit in experiments, a summary or summarizing review, should be generated relying on the mean of the reviews' latent code.", "Consequently, instead of sampling z from p ( z | c ) = N ( z ; ( c ) , I ( c )) , we set it to ( c ) .", "We also found beneficial, in terms of evaluation metrics, not to sample c but instead to rely on the mean predicted by the inference network q ( c | r 1: N ) .", "Our experiments were conducted on business customer reviews from the Yelp Dataset Challenge and Amazon product reviews (He and McAuley, 2016).", "These were pre-processed similarly to Chu and Liu (2019), and the corresponding data statistics are Dataset Training Validation Yelp 38,776/1,012,280 4,311/113,373 Amazon 183,103/4,566,519 9,639/240,819 Table 2: Data statistics after 
pre-processing.", "shown in Table 2.", "Details of the pre-processing are available in Appendix A.2.", "These datasets present different challenges to abstractive summarization systems.", "Yelp reviews contain much personal information and irrelevant details which one may find unnecessary in a summary.", "Our summarizer, therefore, needs to distill important information in reviews while abstracting away from details such as a listing of all items on the menu, or mentions of specific dates or occasions upon which customers visited a restaurant.", "On the contrary, in Amazon reviews, we observed that users tend to provide more objective information and specific details that are useful for decision making (e.g., the version of an electronic product, its battery life, its dimensions).", "In this case, it would be desirable for our summarizer to preserve this information in the output summary.", "For evaluation, we used the same 100 human-created Yelp summaries released by Chu and Liu (2019).", "These were generated by Amazon Mechanical Turk (AMT) workers, who summarized 8 input reviews.", "We created a new test for Amazon reviews following a similar procedure (see Appendix A.6 for details).", "We sampled 60 products and 8 reviews for each product, and they were shown to AMT workers who were asked to write a summary.", "We collected three summaries per product, 28 products were used for development and 32 for testing.", "We used GRUs (Cho et al., 2014) for sequential encoding and decoding we used GRUs.", "We randomly initialized word embeddings that were shared across the model as a form of regularization (Press and Wolf, 2017).", "Further, optimization was performed using Adam (Kingma and Ba, 2014).", "In order to overcome the posterior collapse (Bow-man et al., 2016), both for our model and the vanilla VAE baseline, we applied cyclical annealing (Fu et al., 2019).", "The reported ROUGE scores are based on F1 (see Appendix A.3 for details on hy-perparameters).", "Opinosis is a graph-based abstractive summarizer (Ganesan et al., 2010) designed to generate short opinions based on highly redundant texts.", "Although it is referred to as abstractive, it can only select words from the reviews.", "LexRank is an unsupervised algorithm which selects sentences to appear in the summary based on graph centrality (sentences represent nodes in a graph whose edges have weights denoting similarity computed with tf-idf).", "A node's centrality can be measured by running a ranking algorithm such as PageRank (Page et al., 1999).", "MeanSum 5 is the unsupervised abstractive summarization model (Chu and Liu, 2019) discussed in the introduction.", "We also trained a vanilla text VAE model (Bow-man et al., 2016) with our GRU encoder and decoder.", "When generating a summary for r 1 , ..., r N , we averaged the means of q ( z i | r i ) .", "Finally, we used a number of simple summarization baselines.", "We computed the clustroid review for each group as follows.", "We took each review from a group and computed ROUGE-L with respect to all other reviews.", "The review with the highest ROUGE score was selected as the clustroid review.", "Furthermore, we sampled a random review from each group as the summary, and constructed the summary by selecting the leading sentences from each review of a group.", "Additionally, as an upper bound, we report the performance of an oracle review, i.e., the highest-scoring review in a group when computing ROUGE-L against reference summaries.", "5 For experiments on Yelp, we used the 
checkpoint provided by the authors, as we obtained very similar ROUGE scores when retraining the model.", "As can be seen in Tables 3 and 4, our model, Copycat, yields the highest scores on both Yelp and Amazon datasets.", "We observe large gains over the vanila VAE.", "We conjecture that the vanilla VAE struggles to properly represent the variety of categories under a single prior p ( z ) .", "For example, reviews about a sweater can result in a summary about socks (see example summmaries in Appendix).", "This contrasts with our model which allows each group to have its own prior p ( z | c ) and access to other reviews during decoding.", "The gains are especially large on the Amazon dataset, which is very broad in terms of product categories.", "Our model also substantially outperforms MeanSum.", "As we will confirm in human evaluation, MeanSum's summaries are relatively fluent at the sentence level but often contain hallucinations, i.e., information not present in the input reviews.", "Best-Worst Scaling We performed human evaluation using the AMT platform.", "We sampled 50 businesses from the human-annotated Yelp test set and used all 32 test products from the Amazon set.", "We recruited 3 workers to evaluate each tuple containing summaries from MeanSum, our model, LexRank, and human annotators.", "The reviews and summaries were presented to the workers in random order and were judged using Best-Worst Scaling (Louviere and Woodworth, 1991; Louviere et al., 2015).", "BWS has been shown to produce more reliable results than ranking scales (Kiritchenko and Mohammad, 2016).", "Crowdwork-ers were asked to judge summaries according to the Fluency Coherence Non Red.", "criteria listed below (we show an abridged version below, the full set of instructions is given in Appendix A.5).", "The non-redundancy and coherence criteria were taken from Dang (2005).", "Fluency : the summary sentences should be grammatically correct, easy to read and understand; Coherence : the summary should be well structured and well organized; Non-redundancy : there should be no unnecessary repetition in the summary; Opinion consensus : the summary should reflect common opinions expressed in the reviews; Overall : based on your own criteria (judgment) please select the best and the worst summary of the reviews.", "For every criterion, a system's score is computed as the percentage of times it was selected as best minus the percentage of times it was selected as worst (Orme, 2009).", "The scores range from -1 (unanimously worst) to +1 (unanimously best).", "On Yelp, as shown in Table 5, our model scores higher than the other models according to most criteria, including overall quality.", "The differences with other systems are statistically significant for all the criteria at p < 0 .", "01 , using post-hoc HD Tukey tests.", "The difference in fluency between our system and gold summaries is not statistically significant.", "The results on Amazon are shown in Table 6.", "Our system outperforms other methods in terms of fluency, coherence, and non-redundancy.", "As with Yelp, it trails LexRank according to the opinion consensus criterion.", "Additionally, LexRank is slightly preferable overall.", "All pairwise differences between our model and comparison systems are statistically significant at p < 0 .", "05 .", "Opinion consensus (OC) is a criterion that captures the coverage of common opinions, and it seems to play a different role in the two datasets.", "On Yelp, LexRank has better coverage compared to our model, as 
indicated by the higher OC score, but is not preferred overall .", "In contrast, on Amazon, while the OC score is on the same par, LexRank is preferred overall .", "We suspect that presenting a breadth of exact details on Amazon is more important than on Yelp.", "Moreover, LexRank tends to produce summaries that are about 20 tokens longer than ours resulting in better coverage of input details.", "Content Support The ROUGE metric relies on unweighted n-gram overlap and can be insensitive to hallucinating facts and entities (Falke et al., 2019).", "For example, referring to a burger joint as a veggie restaurant is highly problematic from a user perspective but yields only marginal differences in ROUGE.", "To investigate how well the content of the summaries is supported by the input reviews, we performed a second study.", "We used the same sets as in the human evaluation in Section 5.2, and split MeanSum and our system's summaries into sentences.", "Then, for each summary sentence, we assigned 3 AMT workers to assess how well the sentence is supported by the reviews.", "Workers were advised to read the reviews and rate sentences using one of the following three options.", "Full support : all the content is reflected in the reviews; Partial support : only some content is reflected in the reviews; No support : content is not reflected in the reviews.", "The results in Table 7 indicate that our model is better at preserving information than MeanSum.", "Ablations To investigate the importance of the model's individual components, we performed ablations by removing the latent variables ( z i and c , one at a time), and attention over the other reviews.", "The models were re-trained on the Amazon dataset.", "The results are shown in Table 8.", "They indicate that all components play a role, yet the most significant drop in ROUGE was achieved when the variable z was removed, and only c remained.", "Summaries obtained from the latter system were wordier and looked more similar to reviews.", "Dropping the attention (w/o r i ) results in more generic summaries as the model cannot copy details from the input.", "Finally, the smallest quality drop in terms of ROUGE-L was observed when the variable c was removed.", "In the introduction, we hypothesized that using the mean of latent variables would result in more grounded summaries reflecting the content of the input reviews, whereas sampling would yield texts with many novel and potentially irrelevant details.", "To empirically test this hypothesis, we sampled the latent variables during summary generation, as opposed to using mean values (see Section 3).", "We indeed observed that the summaries were wordier, less fluent, and less aligned to the input reviews, as is also reflected in the ROUGE scores (Table 8).", "Copy Mechanism Finally, we analyzed which words are copied by the full model during summary generation.", "Generally, the model copies around 3-4 tokens per summary.", "We observed a tendency to copy product-type specific words (e.g., shoes ) as well as brands and names.", "Extractive weakly-supervised opinion summarization has been an active area of research.", "A recent example is Angelidis and Lapata (2018).", "First, they learn to assign sentiment polarity to review segments in a weakly-supervised fashion.", "Then, they induce aspect labels for segments relying on R1 R2 RL w/o r i 0.2866 0.0454 0.1863 w/o c 0.2767 0.0507 0.1919 w/o z 0.2926 0.0416 0.1739 Sampling 0.2563 0.0434 0.1716 Full 0.3197 0.0581 0.2016 Table 8: Ablations, ROUGE 
scores on Amazon.", "a small sample of gold summaries.", "Finally, they use a heuristic to construct a summary of segments.", "Opinosis (Ganesan et al., 2010) does not use any supervision.", "The model relies on redundancies in opinionated text and PoS tags in order to generate short opinions.", "This approach is not well suited for the generation of coherent long summaries and although it can recombine fragments of input text, it cannot generate novel words and phrases.", "LexRank (Erkan and Radev, 2004) is an unsupervised extractive approach which builds a graph in order to determine the importance of sentences, and then selects the most representative ones as a summary.", "Isonuma et al. (2019) introduce an unsupervised approach for single review summarization, where they rely on latent discourse trees.", "Other earlier approaches (Gerani et al., 2014; Di Fabbrizio et al., 2014) relied on text planners and templates, while our approach does not require rules and can produce fluent and varied text.", "Finally, conceptually related methods were applied to unsupervised single sentence compression (West et al., 2019; Baziotis et al., 2019; Miao and Blunsom, 2016).", "The most related approach to ours is MeanSum (Chu and Liu, 2019) which treats a summary as a discrete latent state of an autoencoder.", "In contrast, we define a hierarchical model of a review collection and use continuous latent codes.", "In this work, we presented an abstractive summarizer of opinions, which does not use any summaries in training and is trained end-to-end on a large collection of reviews.", "The model compares favorably to the competitors, especially to the only other unsupervised abstractive multi-review summarization system.", "Furthermore, human evaluation of the generated summaries (by considering their alignment with the reviews) shows that those created by our model better reflect the content of the input.", "We would like to thank the anonymous reviewers for their valuable comments.", "Also, Stefanos Angelidis for help with the data as well as Jonathan Mallinson, Serhii Havrylov, and other members of Edinburgh NLP group for discussion.", "We gratefully acknowledge the support of the European Research Council (Titov: ERC StG BroadSem 678254; Lapata: ERC CoG TransModal 681760) and the Dutch National Science Foundation (NWO VIDI 639.022.518)." ]
[ "abstain", "objective", "abstain", "method", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "result", "method", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "objective", "method", "abstain", "result", "other", "other", "other" ]
[ "Understanding and executing natural language instructions in a grounded domain is one of the hallmarks of artificial intelligence.", "In this paper, we focus on instruction understanding in the blocks world domain and investigate the language understanding abilities of two top-performing systems for the task.", "We aim to understand if the test performance of these models indicates an understanding of the spatial domain and of the natural language instructions relative to it, or whether they merely over-fit spurious signals in the dataset.", "We formulate a set of expectations one might have from an instruction following model and concretely characterize the different dimensions of robustness such a model should possess.", "Despite decent test performance, we find that state-of-the-art models fall short of these expectations and are extremely brittle.", "We then propose a learning strategy that involves data augmentation and show through extensive experiments that the proposed learning strategy yields models that are competitive on the original test set while satisfying our expectations much better.", "1 .", "Building agents that can understand and execute natural language instructions in a grounded environment is a hallmark of artificial intelligence (Winograd, 1972).", "There is wide applicability of this technology in navigation (Chen et al., 2019; Tellex et al., 2011; Chen and Mooney, 2011), collaborative building (Narayan-Chen et al., 2019), and several others areas (Li et al., 2020b; Brana-van et al., 2009).", "The key challenge underlying these and many other applications is the need to understand the natural language instruction (to the extent that it is possible) and ground relevant parts of it in the environment.", "While the use of deep networks has led to significant progress on several 1 Our code is publicly available at: http://cogcomp.org/page/publication_view/936 Figure 1: Task: Given a configuration of blocks and an instruction, predict the source and target location.", "benchmarks (Abiodun et al., 2018) an investigation into the instruction understanding capabilities of such systems remains lacking.", "We do not know the extent to which these models truly understand the spatial relations in the environment, nor their robustness to variability in the environment or in the instructions.", "This understanding is also important from the viewpoint of safety critical applications , where robustness to variability is essential.", "While robustness to input perturbations at test-time has been studied in computer vision (Goodfellow et al., 2014) and in certain natural language tasks (Alzantot et al., 2018; Wallace et al., 2019; Shah et al., 2020), it remains relatively elusive in the instruction following task in a grounded environment.", "This can be attributed to the difficulty in characterizing the different expectations of robustness in this setting, due to the multiple channels of input involved, which semantically constrain each other.", "The Blocks World domain is an ideal platform to study the abilities of a system to understand instructions (Winograd, 1972; Bisk et al., 2016; Narayan-Chen et al., 2019; Misra et al., 2017; Bisk et al., 2018; Mehta and Goldwasser, 2019; Tan and Bansal, 2018).", "Despite being seemingly simple, it presents key reasoning challenges, including compositional language understanding and spatial understanding, that need to be addressed in any instructional domain.", "In Bisk et al. 
(2016), the", "en-(a)", "vironment consists of a number of blocks placed on a flat board.", "The model is provided with the current configuration of blocks in the environment along with an instruction, and is tasked with executing the instruction by manipulating appropriate blocks.", "In this work, we follow the more challenging setting in Bisk et al. (2016) where the blocks are unlabeled, necessitating the use of involved referential expressions in the instructions.", "Fig.1 shows that the instruction and block configuration are semantically dependent, jointly determining the outcome.", "Despite the success of top performing models (Tan and Bansal, 2018; Bisk et al., 2016) on the test set for this task, we question if the models are able to reason about the complex language and spatial concepts of this task and generalize or are merely over fitting the test set.", "To investigate these questions we formulate the following expectations one should have from an instruction following model: (1) Identity Invariance Expectation : The performance of the model on an input should not degrade on slightly perturbing the input.", "(2) Symmetry Equivariance Expectation : A symmetric transformation of an input should cause an equivalent transformation of model prediction and performance should not degrade.", "(3) Length Invariance Expectation : The performance of a model should not depend on the length of the input, as long as the semantics is unchanged.", "Our expectations complement existing work in three dimensions: (1) is related to adversarial perturbations (Goodfellow et al., 2014) and (2) is related to equivariance of CNNs explored in computer vision (Cohen and Welling, 2016).", "It is also related to contrast sets (Gardner et al., 2020; Li et al., 2020a) and counterfactual data augmentation (Kaushik et al., 2019).", "Here, we extend the investigation to this new task of instruction following involving both natural language and an environment, discrete and continuous perturbations and both regression and classification tasks.", "Contrast (3) is related to Lake and Baroni (2018) where vulnerability to length in a toy sequence-sequence task was demonstrated.", "Here we show that length-based vulnerability exists in another modalitythe number of blocks present on the board, for this much more complicated task.", "While these form only a subset of the expectations one might have from an instruction following model, it already allows us to formally characterize some of the dimensions of robustness an instruction following agent must have.", "As an example, a tiny shift in the location of each block should not affect the model prediction (identity invariance).", "In Sec. 2, we formulate concrete perturbations to test whether a given model satisfies these expectations.", "The space of perturbations that we consider have the following attributes:", "(a) Semantic Preserving or Semantic Altering.", "(b) Linguistic or Geometric.", "(c) Discrete or Continuous.", "We find that both models studied suffer a large performance drop under each of the perturbations, and fall short of satisfying our expectations.", "We then present a data augmentation scheme designed to better address our expectations from such models.", "Our extensive experiments in Sec. 
2.3 indicate that our learning strategy results in more robust models that perform much better on the perturbed test set while maintaining similar performance on the original test set.", "Given the block configuration W R 20 3 (three-dimensional coordinate locations of a maximum of 20 unlabeled blocks B = b 1 , ..., b 20 and an instruction I , the model has to move the appropriate block.", "There are two sub-tasks:", "(i) predicting the source block to be moved and", "(ii) predicting the Figure 3: Relative Performance Degradation for the source (classification and regression) and target (regression) sub-tasks.", "While the target output is always a location y R 3 , for the source task the model can either predict a particular block y { 1 , 2 , ..., 20 } (Tan and Bansal, 2018) or a particular source location y R 3 (Bisk et al., 2016).", "Let P denote a perturbation space and ( I (cid:48) , W (cid:48) ) be the perturbed version of ( I, W ) under P .", "Note that ( I (cid:48) , W (cid:48) ) can be chosen randomly or adversarially as the perturbation which maximizes the loss : ( I (cid:48) , W (cid:48) ) = arg max ( I (cid:48) ,W (cid:48) ) P (cid:96) ( f ( I (cid:48) , W (cid:48) ) , O ) .", "Here (cid:96) denotes a loss function and O denotes the gold source/target location.", "If the perturbation space is discrete and finite we can simple search over all candidate ( I (cid:48) , W (cid:48) ) to find the one with the maximum loss.", "If it is continuous and infinite, we can use a first order method (eg: First Order Gradient Signed Method FGSM (Goodfellow et al., 2014)) to find the adversarial ( I (cid:48) , W (cid:48) ) .", "Now we characterize P .", "Broadly, we have the following two types of perturbations:", "(i) Semantics Preserving (SP) : Perturbations when applied to either I or W , do not change the meaning of either.", "Since the modified instruction I (cid:48) or world state W (cid:48) is semantically unchanged, the model should perform similarly on the perturbed input.", "Informally, we want f ( I, W ) f ( I (cid:48) , W (cid:48) ) since I I (cid:48) and W W (cid:48) .", "SP perturbations can be of the following types: Linguistic (SPL) : Perturbations that do not change the overall meaning of the instruction.", "Consider Lexical Substitutions: We identify a list of synonyms for each of the shapes and spatial concepts ( C ) in this domain.", "2 For each test example which contains at least one of these 2 Examples of synonym sets for shapes in C are { tower ( s ) , stack ( s ) } , { block ( s ) , brick ( s ) , box ( es ) } .", "concepts we adversarially pick the one with the highest loss over all combinations of substitutions from the synonyms in C .", "Since the size of these synonym sets are small, an explicit search over all candidate substitutions is possible, although the search space grows combinatorially with the number of elements of C in I .", "Geometric (SPG) : These perturbations do not change the semantics of the board.", "Tiny changes in the block locations which preserve the overall semantics of W should not affect model predictions.", "We perturb each block location slightly in an adversarial direction 3 w.r.t W .", "Count (SPC) : We identify distractor blocks which do not affect the meaning of the instruction (Fig.", "2(b)).", "Large distance from the source and target location acts as a proxy for this.", "P comprises of deleting k blocks where k { 0 , 1 , 2 , ..., N } is chosen adversarially to generate W (cid:48) .", "We set N = 3 .", "4 Permutation (SPP) : These 
are perturbations where the order in which the block locations are fed to the model, are permuted: ( B ) = { b (1) , ..., b (20) } .", "While semantically nothing changes in the input ( ( I (cid:48) , W (cid:48) ) ( I, ( W )) , where denotes semantic equivalence), we see models still suffer a large performance drop, even for a random permutation .", "(ii) Semantic Altering (SA) : These perturbations create a new ( I (cid:48) , W (cid:48) ) pair with different semantics, using a simple transformation that we want the model to be equivariant to.", "A horizontal mirroring of W with a corresponding change in I (flipping all the left concept words to right and vice versa) 3 according to a FGSM attack with (cid:15) = 0 .", "05 4 Addition of such distractor blocks at locations far from the source and target locations, form a similar perturbation set that also leads to a significant performance drop for existing models (Appendix A).", "2.1 Model Performance vs Our Expectations The dataset from Bisk et al. (2016) 5 has 2493 training examples and 720 test examples.", "We evaluate the performance of our implementation of two models: from Bisk et al. (2016) and from Tan and Bansal (2018).", "One important difference between the two models is that while both models treat the target subtask T as a regression task (trained and evaluated using a normalized mean squared error called block distance BD), Tan and Bansal (2018) treats the source subtask as a classification task S cls (trained using cross entropy loss as (cid:96) and evaluated using classification accuracy", "Acc.) while Bisk et al. (2016) treats it as a regression task S reg (trained and evaluated using BD).", "We use both models for the source and the Bisk et al. (2016) model for the target subtask.", "We compare model performance on the original test set using standard evaluation and on the perturbed test set using a robust evaluation measure.", "The robust evaluation measure for S reg and T is max( BD ( f ( I, W ) , O ) , BD ( f ( I (cid:48) , W (cid:48) ) , O )) and min( Acc ( f ( I, W ) , O ) , Acc ( f ( I (cid:48) , W (cid:48) ) , O )) for S cls .", "This robust evaluation formulation is motivated by the requirement that models perform well on both the original and the perturbed examples.", "From Fig. 
3 we see that models suffer a large performance drop of up to 87.48%, 42.86% and 30.03% for the source-classification, source-regression and target subtasks respectively, over different P (dataset: https://groundedlanguage.github.io).", "In this section we show that a simple data augmentation strategy improves model performance under robust evaluation on the perturbed test set.", "For each input ( I, W ) in the training data we add another example which is adversarial: ( I', W' ) = arg max_{( I', W' ) ∈ P} ℓ( f ( I', W' ), O ).", "This perturbation set P used in training is the same one that is used for robust test evaluation.", "When P is continuous (e.g., SPG), we use the FGSM attack to solve this maximization and obtain ( I', W' ).", "When P is discrete (e.g., SPL, SPC), we search over the perturbation space to find the perturbation with the highest loss.", "We train the model on a combined dataset consisting of both the original training set and the adversarially augmented data.", "This is an extension of Adversarial Training (Madry et al., 2017) to the instruction following task for", "(i) both discrete and continuous perturbations", "(ii) both regression and classification tasks.", "In this section we show the benefits of adversarially augmented robust training.", "Consider the models M_std from Bisk et al. (2016) and Tan and Bansal (2018), which were shown to perform poorly under robust evaluation in Sec. 2.1.", "Here we compare their performance with their robustly trained variants M_rob.", "For all models we perform standard evaluation and robust evaluations for each perturbation type.", "This is done for the source (classification and regression) and target sub-tasks.", "In Table 1 we show the results under the different settings, averaged over 5 runs.", "For every perturbation category and for all sub-tasks, we see that the robust models", "(i) outperform their standard counterparts in terms of the robust evaluation metric and", "(ii) in some cases even on standard evaluation.", "Thus, this knowledge-free robust training framework can produce models that are less brittle to perturbations while keeping competitive standard performance on the original test set.", "In this paper we formulated the performance expectations for an instruction following system.", "Based on these expectations, we created several categories of perturbations and showed that existing models fail spectacularly on them.", "We then demonstrated the benefits of adversarial data augmentation on each perturbation category.", "This work was supported by Contracts W911NF-15-1-0461 and FA8750-19-2-0201 with the US Defense Advanced Research Projects Agency (DARPA), the Army Research Office under Grant Number W911NF-20-1-0080, and by ONR Contract N00014-19-1-2620.", "The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Department of Defense or the U.S. Government." ]
[ "abstain", "objective", "objective", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "result", "objective", "other", "other" ]
[ "As an essential task in task-oriented dialog systems, slot filling requires extensive training data in a certain domain.", "However, such data are not always available.", "Hence, cross-domain slot filling has naturally arisen to cope with this data scarcity problem.", "In this paper, we propose a Coa rse-to-fine approa ch ( Coach ) for cross-domain slot filling.", "Our model first learns the general pattern of slot entities by detecting whether the tokens are slot entities or not.", "It then predicts the specific types for the slot entities.", "In addition, we propose a template regularization approach to improve the adaptation robustness by regularizing the representation of utterances based on utterance templates.", "Experimental results show that our model significantly outperforms state-of-the-art approaches in slot filling.", "Furthermore, our model can also be applied to the cross-domain named entity recognition task, and it achieves better adaptation performance than other existing baselines.", "The code is available at https: //github.com/zliucr/coach .", "Slot filling models identify task-related slot types in certain domains for user utterances, and are an indispensable part of task-oriented dialog systems.", "Supervised approaches have made great achievements in the slot filling task (Goo et al., 2018; Zhang et al., 2019), where substantial labeled training samples are needed.", "However, collecting large numbers of training samples is not only expensive but also time-consuming.", "To cope with the data scarcity issue, we are motivated to investigate cross-domain slot filling methods, which leverage knowledge learned in the source domains and adapt the models to the target domain with a minimum number of target domain labeled training samples.", "Can you put this tune onto latin dance cardio Playlist Music Item O O O O O O B I I O O O O B O O O O", "(a) Framework proposed by Bapna et al. (2017).", "Can you put this tune onto latin dance cardio O O O O B O B I I Slot Entity Playlist Music Item Step 1 Step 2 Step 2", "(b) Our proposed framework, Coach .", "classification models from adapting to the target domain without any target domain supervision signals.", "Recently, Bapna et al. 
(2017) proposed a cross-domain slot filling framework, which enables zero-shot adaptation.", "As illustrated in Figure 1a, their model conducts slot filling individually for each slot type.", "It first generates word-level representations, which are then concatenated with the representation of each slot type description, and the predictions are based on the concatenated features for each slot type.", "Due to the inherent variance of slot entities across different domains, it is difficult for this framework to capture the whole slot entity (e.g., latin dance cardio in Figure 1a) in the target domain.", "There also exists a multiple prediction problem.", "For example, tune in Figure 1a could be predicted as B for both music item and playlist, which would cause additional trouble for the final prediction.", "We emphasize that in order to capture the whole slot entity, it is pivotal for the model to share its parameters for all slot types in the source domains and learn the general pattern of slot entities.", "Therefore, as depicted in Figure 1b, we propose a new cross-domain slot filling framework called Coach,", "a coarse-to-fine approach.", "It first coarsely learns the slot entity pattern by predicting whether the tokens are slot entities or not.", "Then, it combines the features for each slot entity and predicts the specific (fine) slot type based on the similarity with the representation of each slot type description.", "In this way, our framework is able to avoid the multiple predictions problem.", "Additionally, we introduce a template regularization method that delexicalizes slot entity tokens in utterances into different slot labels and produces both correct and incorrect utterance templates to regularize the utterance representations.", "By doing so, the model learns to cluster the representations of semantically similar utterances (i.e., in the same or similar templates) into a similar vector space, which further improves the adaptation robustness.", "Experimental results show that our model surpasses the state-of-the-art methods by a large margin in both zero-shot and few-shot scenarios.", "In addition, further experiments show that our framework can be applied to cross-domain named entity recognition, and achieves better adaptation performance than other existing frameworks.", "Coarse-to-fine methods in NLP are best known for syntactic parsing (Charniak et al., 2006; Petrov, 2011).", "Zhang et al.
(2017) reduced the search space of semantic parsers by using coarse macro grammars.", "Different from the previous work, we apply the idea of coarse-to-fine into cross-domain slot filling to handle unseen slot types by separating the slot filling task into two steps (Zhai et al., 2017; Guerini et al., 2018).", "Coping with low-resource problems where there are zero or few existing training samples has always been an interesting and challenging task (Kingma et al., 2014; Lample et al., 2018; Liu et al., 2019a,b; Lin et al., 2020).", "Cross-domain adaptation addresses the data scarcity problem in low-resource target domains (Pan et al., 2010; Jaech et al., 2016; Guo et al., 2018; Jia et al., 2019; Liu et al., 2020; Winata et al., 2020).", "However, most research studying the cross-domain aspect has not focused on predicting unseen label types in the target domain since both source and target domains have the same label types in the considered tasks (Guo et al., 2018).", "In another line of work, to bypass unseen label types, Ruder and Plank (2018) and Jia et al. (2019) utilized target domain training samples, so that there was no unseen label type in the target domain.", "Recently, based on the framework proposed by Bapna et al. (2017) (discussed in Section 1), Lee and Jha (2019) added an attention layer to produce slot-aware representations, and Shah et al. (2019) leveraged slot examples to increase the robustness of cross-domain slot filling adaptation.", "As depicted in Figure 2, the slot filling process in our Coach framework consists of two steps.", "In the first step, we utilize a BiLSTM-CRF structure (Lample et al., 2016) to learn the general pattern of slot entities by having our model predict whether tokens are slot entities or not (i.e., 3-way classification for each token).", "In the second step, our model further predicts a specific type for each slot entity based on the similarities with the description representations of all possible slot types.", "To generate representations of slot entities, we leverage another encoder, BiLSTM (Hochre-iter and Schmidhuber, 1997), to encode the hidden states of slot entity tokens and produce representations for each slot entity.", "We represent the user utterance with n tokens as w = [ w 1 , w 2 , ..., w n ] , and E denotes the embedding layer for utterances.", "The whole process can be formulated as follows: [ h 1 , h 2 , ..., h n ] = BiLSTM ( E ( w )) , (1) [ p 1 , p 2 , ..., p n ] = CRF ([ h 1 , h 2 , ..., h n ]) , (2) where [ p 1 , p 2 , ..., p n ] are the logits for the 3-way classification.", "Then, for each slot entity, we take its hidden states to calculate its representation: r k = BiLSTM ([ h i , h i +1 , ...h j ]) , (3) s k = M desc r k , (4) where r k denotes the representation of the k th slot entity, [ h i , h i +1 , ..., h j ] denotes the BiLSTM hidden states for the k th slot entity, M desc R n s d s is the representation matrix of the slot description ( n s is the number of possible slot types and d s is the dimension of slot descriptions), and s k is the specific slot type prediction for this k th slot entity.", "We obtain the slot description representation r desc R d s by summing the embeddings of the N slot description tokens (similar to Shah et al. 
(2019)): r desc = N (cid:88) i =1 E ( t i ) , (5) where t i is the i th token and E is the same embedding layer as that for utterances.", "In many cases, similar or the same slot types in the target domain can also be found in the source domains.", "Nevertheless, it is still challenging for the model to recognize the slot types in the target domain owing to the variance between the source domains and the target domain.", "To improve the adaptation ability, we introduce a template regularization method.", "As shown in Figure 2, we first replace the slot entity tokens in the utterance with different slot labels to generate correct and incorrect utterance templates.", "Then, we use BiLSTM and an attention layer (Felbo et al., 2017) to generate the utterance and template representations: e t = h t w a , t = exp ( e t ) (cid:80) nj =1 exp ( e j ) , R = n (cid:88) t =1 t h t , (6) where h t is the BiLSTM hidden state in the t th step, w a is the weight vector in the attention layer and R is the representation for the input utterance or template.", "We minimize the regularization loss functions for the right and wrong templates, which can be formulated as follows: L r = MSE ( R u , R r ) , (7) L w = MSE ( R u , R w ) , (8) where R u is the representation for the user utterance, R r and R w are the representations of right and wrong templates, we set as one, and MSE denotes mean square error.", "Hence, in the training phase, we minimize the distance between R u and R r and maximize the distance between R u and R w .", "To generate a wrong template, we replace the correct slot entity with another random slot entity, and we generate two wrong templates for each utterance.", "To ensure the representations of the templates are meaningful (i.e., similar templates have similar representations) for training R u , in the first several epochs, the regularization loss is only to optimize the template representations, and in the following epochs, we optimize both template representations and utterance representations.", "By doing so, the model learns to cluster the representations in the same or similar templates into a similar vector space.", "Hence, the hidden states of tokens that belong to the same slot type tend to be similar, which boosts the robustness of these slot types in the target domain.", "We evaluate our framework on SNIPS (Coucke et al., 2018), a public spoken language understanding dataset which contains 39 slot types across seven domains (intents) and 2000 training samples per domain.", "To test our framework, each time, we choose one domain as the target domain and the other six domains as the source domains.", "Moreover, we also study another adaptation case where there is no unseen label in the target domain.", "We utilize the CoNLL-2003 English named entity recognition (NER) dataset as the source domain (Tjong Kim Sang and De Meulder, 2003), and the CBS SciTech News NER dataset from Jia et al. (2019) as the target domain.", "These two datasets have the same four types of entities, namely, PER (person), LOC (location), ORG (organization), and MISC (miscellaneous).", "We use word-level (Bojanowski et al., 2017) and character-level (Hashimoto et al., 2017) embeddings for our model as well as all the following baselines.", "Concept Tagger (CT) Bapna et al. (2017) proposed a slot filling framework that utilizes slot descriptions to cope with the unseen slot types in the target domain.", "Robust Zero-shot Tagger (RZT) Based on CT, Shah et al. 
(2019) leveraged example values of slots to improve robustness of cross-domain adaptation.", "BiLSTM-CRF This baseline is only for the cross-domain NER.", "Since there is no unseen label in the NER target domain, the BiLSTM-CRF (Lam-ple et al., 2016) uses the same label set for the source and target domains and casts it as an entity classification task for each token, which is applicable in both zero-shot and few-shot scenarios.", "We use a 2-layer BiLSTM with a hidden size of 200 and a dropout rate of 0.3 for both the template encoder and utterance encoder.", "Note that the parameters in these two encoders are not shared.", "The BiLSTM for encoding the hidden states of slot entity tokens has one layer with a hidden size of 200, which would output the same dimension as the concatenated word-level and char-level embeddings.", "We use Adam optimizer with a learning rate of 0.0005.", "Cross-entropy loss is leveraged to train the 3-way classification in the first step, and the specific slot type predictions are used in the second step.", "We split 500 data samples in the target domain as the validation set for choosing the best model and the remainder are used for the test set.", "We implement the model in CT and RZT and follow the same setting as for our model for a fair comparison.", "Quantitative Analysis As illustrated in Table 1, we can clearly see that our models are able to achieve significantly better performance than the current state-of-the-art approach (RZT).", "The CT framework suffers from the difficulty of capturing the whole slot entity, while our framework is able to recognize the slot entity tokens by sharing its parameters across all slot types.", "Based on the CT framework, the performance of RZT is still limited, and Coach outperforms RZT by a 3% F1-score in the zero-shot setting.", "Additionally, template regularization further improves the adaptation robustness by helping the model cluster the utterance representations into a similar vector space based on their corresponding template representations.", "Interestingly, our models achieve impressive performance in the few-shot scenario.", "In terms of the averaged performance, our best model (Coach+TR) outperforms RZT by 8% and 9% F1-scores on the 20-shot and 50-shot settings, respectively.", "We conjecture that our model is able to better recognize the whole slot entity in the target domain and map the representation of the slot entity belonging to the same slot type into a similar vector space TargetSamples 0 samples 20 samples 50 samples unseen seen unseen seen unseen seen CT 27.10 44.18 50.13 61.21 62.05 69.64 RZT 28.28 47.15 52.56 63.26 63.96 73.10 Coach 32.89 50.78 61.96 73.78 74.65 76.95 Coach+TR 34.09 51.93 64.16 73.85 76.49 80.16 Table 2: Averaged F1-scores for seen and unseen slots over all target domains.", "to the representation of this slot type based on Eq (4).", "This enables the model to quickly adapt to the target domain slots.", "Analysis on Seen and Unseen Slots We take a further step to test the models on seen and unseen slots in target domains to analyze the effectiveness of our approaches.", "To test the performance, we split the test set into unseen and seen parts.", "An utterance is categorized into the unseen part as long as there is an unseen slot (i.e., the slot does not exist in the remaining six source domains) in it.", "Otherwise we categorize it into the seen part.", "The results for the seen and unseen categories are shown in Table 2.", "We observe that our approaches generally improve on 
both unseen and seen slot types compared to the baseline models.", "For the improvements in the unseen slots, our models are better able to capture the unseen slots since they explicitly learn the general pattern of slot entities.", "Interestingly, our models also bring large improvements in the seen slot types.", "We conjecture that it is also challenging to adapt models to seen slots due to the large variance between the source and target domains.", "For example, slot entities belonging to the object type in the RateBook domain are different from those in the SearchCreativeWork domain.", "Hence, the baseline models might fail to recognize these seen slots in the target domain, while our approaches can adapt to the seen slot types more quickly in comparison.", "In addition, we observe that template regularization improves performance in both seen and unseen slots, which illustrates that clustering representations based on templates can boost the adaptation ability.", "From Table 3, we see that the Coach framework is also suitable for the case where there are no unseen labels in the target domain in both the zero-shot and few-shot scenarios, while CT and RZT are not as effective as BiLSTM-CRF.", "However, we observe that template regularization loses its effectiveness Target Samples 0 samples 50 samples CT (Bapna et al. (2017)) 61.43 65.85 RZT (Shah et al. (2019)) 61.94 65.21 BiLSTM-CRF 61.77 66.57 Coach 64.08 68.35 Coach + TR 64.54 67.45 Table 3: F1-scores on the NER target domain (CBS SciTech News).", "in this task, since the text in NER is relatively more open, which makes it hard to capture the templates for each label type.", "We conduct an ablation study in terms of the methods to encode the entity tokens (described in Eq.", "(3)) to investigate how they affect the performance.", "Instead of using BiLSTM, we try two alternatives.", "One is to use the encoder of Transformer (trs) (Vaswani et al., 2017), and the other is to simply sum the hidden states of slot entity tokens.", "From Table 4, we can see that there is no significant performance difference among different methods, and we observe that using BiLSTM to encode the entity tokens generally achieves better results.", "We introduce a new cross-domain slot filling framework to handle the unseen slot type issue.", "Our model shares its parameters across all slot types and learns to predict whether input tokens are slot entities or not.", "Then, it detects concrete slot types for these slot entity tokens based on the slot type descriptions.", "Moreover, template regularization is proposed to improve the adaptation robustness further.", "Experiments show that our model significantly outperforms existing cross-domain slot filling approaches, and it also achieves better performance for the cross-domain NER task, where there is no unseen label type in the target domain.", "This work is partially funded by ITF/319/16FP and MRP/055/18 of the Innovation Technology Commission, the Hong Kong SAR Government." ]
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "result", "result", "other", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "other", "other", "method", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "abstain", "result", "other" ]
[ "Despite the recent popularity of word embedding methods, there is only a small body of work exploring the limitations of these representations.", "In this paper, we consider one aspect of embedding spaces, namely their stability.", "We show that even relatively high frequency words (100-200 occurrences) are often unstable.", "We provide empirical evidence for how various factors contribute to the stability of word embeddings, and we analyze the effects of stability on downstream tasks.", "Word embeddings are low-dimensional, dense vector representations that capture semantic properties of words.", "Recently, they have gained tremendous popularity in Natural Language Processing (NLP) and have been used in tasks as diverse as text similarity (Kenter and De Rijke, 2015), part-of-speech tagging (Tsvetkov et al., 2016), sentiment analysis (Faruqui et al., 2015), and machine translation (Mikolov et al., 2013a).", "Although word embeddings are widely used across NLP, their stability has not yet been fully evaluated and understood.", "In this paper, we explore the factors that play a role in the stability of word embeddings, including properties of the data, properties of the algorithm, and properties of the words.", "We find that word embeddings exhibit substantial instabilities, which can have implications for downstream tasks.", "Using the overlap between nearest neighbors in an embedding space as a measure of stability (see section 3 below for more information), we observe that many common embedding spaces have large amounts of instability.", "For example, Figure 1 shows the instability of the embeddings obtained by training word2vec on the Penn Treebank (PTB) (Marcus et al., 1993).", "As expected, lower frequency words have lower stability and higher fre-2 0 2 4 2 8 2 12 2 16 Frequency of Word in PTB (log scale) 100 75 50 25 10 5 2 % S t a b ili t y ( l o g s c a l e ) 1e-04 1e-03 1e-02 1e-01 1 % W o r d s i n P a r t i c u l a r F r e q u e n c y B u c k e t ( l o g s c a l e ) Figure 1: Stability of word2vec as a property of frequency in the PTB.", "quency words have higher stability.", "What is surprising however about this graph is the medium-frequency words, which show huge variance in stability.", "This cannot be explained by frequency, so there must be other factors contributing to their instability.", "In the following experiments, we explore which factors affect stability, as well as how this stability affects downstream tasks that word embeddings are commonly used for.", "To our knowledge, this is the first study comprehensively examining the factors behind instability.", "There has been much recent interest in the applications of word embeddings, as well as a small, but growing, amount of work analyzing the properties of word embeddings.", "Here, we explore three different embedding methods: PPMI (Bullinaria and Levy, 2007), 2092 word2vec (Mikolov et al., 2013b), and GloVe (Pennington et al., 2014).", "Various aspects of the embedding spaces produced by these algorithms have been previously studied.", "Particularly, the effect of parameter choices has a large impact on how all three of these algorithms behave (Levy et al., 2015).", "Further work shows that the parameters of the embedding algorithm word2vec influ-ence the geometry of word vectors and their context vectors (Mimno and Thompson, 2017).", "These parameters can be optimized; Hellrich and Hahn (2016) posit optimal parameters for negative sampling and the number of epochs to train for.", "They also demonstrate that in 
addition to parameter settings, word properties, such as word ambiguity, affect embedding quality.", "In addition to exploring word and algorithmic parameters, concurrent work by Antoniak and Mimno (2018) evaluates how document properties affect the stability of word embeddings.", "We also explore the stability of embeddings, but focus on a broader range of factors, and consider the effect of stability on downstream tasks.", "In contrast, Antoniak and Mimno focus on using word embeddings to analyze language (e.g., Garg et al., 2018), rather than to perform tasks.", "At a higher level of granularity, Tan et al. (2015) analyze word embedding spaces by comparing two spaces.", "They do this by linearly transforming one space into another space, and they show that words have different usage properties in different domains (in their case, Twitter and Wikipedia).", "Finally, embeddings can be analyzed using second-order properties of embeddings (e.g., how a word relates to the words around it).", "Newman-Griffis and Fosler-Lussier (2017) validate the usefulness of second-order properties, by demonstrating that embeddings based on second-order properties perform as well as the typical first-order embeddings.", "Here, we use second-order properties of embeddings to quantify stability.", "We define stability as the percent overlap between nearest neighbors in an embedding space.", "1 Given a word W and two embedding spaces A and B , take the ten nearest neighbors of W in both A and B .", "Let the stability of W be the percent 1 This metric is concurrently explored in work by Antoniak and Mimno (2018).", "overlap between these two lists of nearest neighbors.", "100% stability indicates perfect agreement between the two embedding spaces, while 0% stability indicates complete disagreement.", "In order to find the ten nearest neighbors of a word W in an embedding space A , we measure distance between words using cosine similarity.", "2 This definition of stability can be generalized to more than two embedding spaces by considering the average overlap between two sets of embedding spaces.", "Let X and Y be two sets of embedding spaces.", "Then, for every pair of embedding spaces ( x, y ) , where x X and y Y , take the ten nearest neighbors of W in both x and y and calculate percent overlap.", "Let the stability be the average percent overlap over every pair of embedding spaces ( x, y ) .", "Consider an example using this metric.", "Table 1 shows the top ten nearest neighbors for the word international in three randomly initialized word2vec embedding spaces trained on the NYT Arts domain (see Section 4.3 for a description of this corpus).", "These models share some similar words, such as metropolitan and national , but there are also many differences.", "On average, each pair of models has four out of ten words in common, so the stability of international across these three models is 40%.", "The idea of evaluating ten best options is also found in other tasks, like lexical substitution (e.g., McCarthy and Navigli, 2007) and word associa-2 We found comparable results for other distance metrics, such as l 1 norm, l 2 norm, and l norm, but we report results from cosine similarity to be consistent with other word embedding research.", "tion (e.g., Garimella et al., 2017), where the top ten results are considered in the final evaluation metric.", "To give some intuition for how changing the number of nearest neighbors affects our stability metric, consider Figure 2. 
This graph shows how the stability of GloVe changes with the frequency of the word and the number of neighbors used to calculate stability; please see the figure caption for a more detailed explanation of how this graph is structured.", "Within each frequency bucket, the stability is consistent across varying numbers of neighbors.", "Ten nearest neighbors performs approximately as well as a higher number of nearest neighbors (e.g., 100).", "We see this pattern for low frequency words as well as for high frequency words.", "Because the performance does not change substantially by increasing the number of nearest neighbors, it is computationally less intensive to use a small number of nearest neighbors.", "We choose ten nearest neighbors as our metric throughout the rest of the paper.", "As we saw in Figure 1, embeddings are sometimes surprisingly unstable.", "To understand the factors behind the (in)stability of word embeddings, we build a regression model that aims to predict the stability of a word given: (1) properties related to the word itself; (2) properties of the data used to train the embeddings; and (3) properties of the algorithm used to construct these embeddings.", "Using this regression model, we draw observations about factors that play a role in the stability of word embeddings.", "We use ridge regression to model these various factors (Hoerl and Kennard, 1970).", "Ridge regression regularizes the magnitude of the model weights, producing a more interpretable model than non-regularized linear regression.", "This regularization mitigates the effects of multicollinearity (when two features are highly correlated).", "Specifically, given N ground-truth data points with M extracted features per data point, let x n R 1 M be the features for sample n and let y R 1 N be the set of labels.", "Then, ridge regression learns a set of weights w R 1 M by minimizing the least squares function with l 2 regularization, where is a regularization constant: L ( w ) = 1 2 NX n =1 ( y n w > x n ) 2 + 2 || w || 2 We set = 1 .", "In addition to ridge regression, we tried non-regularized linear regression.", "We obtained comparable results, but many of the weights were very large or very small, making them hard to interpret.", "The goodness of fit of a regression model is measured using the coefficient of determination R 2 .", "This measures how much variance in the dependent variable y is captured by the independent variables x .", "A model that always predicts the expected value of y , regardless of the input features, will receive an R 2 score of 0.", "The highest possible R 2 score is 1, and the R 2 score can be negative.", "Given this model, we create training instances by observing the stability of a large number of 2094 words across various combinations of two embedding spaces.", "Specifically, given a word W and two embedding spaces A and B , we encode properties of the word W , as well as properties of the datasets and the algorithms used to train the embedding spaces A and B .", "The target value associated with this features is the stability of the word W across embedding spaces A and B .", "We repeat this process for more than 2,500 words, several datasets, and three embedding algorithms.", "Specifically, we consider all the words present in all seven of the data domains we are using (see Section 4.3), 2,521 words in total.", "Using the feature categories described below, we generate a feature vector for each unique word, dataset, algorithm, and dimension size, resulting in a total of 
27,794,025 training instances.", "To get good average estimates for each embedding algorithm, we train each embedding space five times, randomized differently each time (this does not apply to PPMI, which has no random component).", "We then train a ridge regression model on these instances.", "The model is trained to predict the stability of word W across embedding spaces A and B (where A and B are not necessarily trained using the same algorithm, parameters, or training data).", "Because we are using this model to learn associations between certain features and stability, no test data is necessary.", "The emphasis is on the model itself, not on the model's performance on a specific task.", "We describe next each of the three main categories of factors examined in the model.", "An example of these features is given in Table 2. 4.2 Word Properties We encode several features that capture attributes of the word W .", "First, we use the primary and secondary part-of-speech (POS) of the word.", "Both of these are represented as bags-of-words of all possible POS, and are determined by looking at the primary (most frequent) and secondary (sec-ond most frequent) POS of the word in the Brown corpus 3 (Francis and Kucera, 1979).", "If the word is not present in the Brown corpus, then all of these POS features are set to zero.", "polysemy of the word, we consider the number of different POS present.", "For a finer-grained representation, we use the number of different WordNet senses associated with the word (Miller, 1995; Fellbaum, 1998).", "We also consider the number of syllables in a word, determined using the CMU Pronuncing Dictionary (Weide, 1998).", "If the word is not present in the dictionary, then this is set to zero.", "Data features capture properties of the training data (and the word in relation to the training data).", "For this model, we gather data from two sources: New York Times (NYT) (Sandhaus, 2008) and Europarl (Koehn, 2005).", "Overall, we consider seven domains of data: (1) NYT U.S., (2) NYT New York and Region, (3) NYT Business, (4) NYT Arts, (5) NYT Sports, (6) All of the data from 2095 Vocab.", "domains 1-5 (denoted All NYT), and (7) All of English Europarl.", "Table 3 shows statistics about these datasets.", "The first five domains are chosen because they are the top five most common categories of news articles present in the NYT corpus.", "They are smaller than All NYT and Europarl, and they have a narrow topical focus.", "The All NYT domain is more diverse topically and larger than the first five domains.", "Finally, the Europarl domain is the largest domain, and it is focused on a single topic (European Parliamentary politics).", "These varying datasets allow us to consider how data-dependent properties affect stability.", "We use several features related to domain.", "First, we consider the raw frequency of word W in both the domain of data used for embedding space A and the domain of data for space B .", "To make our regression model symmetric, we effectively encode three features: the higher raw frequency (between the two), the lower raw frequency, and the absolute difference in raw frequency.", "We also consider the vocabulary size of each corpus (again, symmetrically) and the percent overlap between corpora vocabulary, as well as the domain of each of the two corpora, represented as a bag-of-words of domains.", "Finally, we consider whether the two corpora are from the same domain.", "Our final data-level features explore the role of curriculum learning in 
stability.", "It has been posited that the order of the training data affects the performance of certain algorithms, and previous work has shown that for some neural network-based tasks, a good training data order (curricu-lum learning strategy) can improve performance (Bengio et al., 2009).", "Curriculum learning has been previously explored for word2vec , where it has been found that optimizing training data order can lead to small improvements on common NLP tasks (Tsvetkov et al., 2016).", "Of the embedding algorithms we consider, curriculum learning only affects word2vec .", "Because GloVe and PPMI use the data to learn a complete matrix before building embeddings, the order of the training data will not affect their performance.", "To measure the effects of training data order, we include as features the first appearance of word W in the dataset for embedding space A and the first appearance of W in the dataset for embedding space B (represented as percentages of the total number of training sentences) 4 .", "We further include the absolute difference between these percentages.", "In addition to word and data properties, we encode features about the embedding algorithms.", "These include the different algorithms being used, as well as the different parameter settings of these algorithms.", "Here, we consider three embedding algorithms, word2vec , GloVe, and PPMI.", "The choice of algorithm is represented in our feature vector as a bag-of-words.", "PPMI creates embeddings by first building a positive pointwise mutual information word-context matrix, and then reducing the dimensionality of this matrix using SVD (Bullinaria and Levy, 2007).", "A more recent word embedding algorithm, word2vec (skip-gram model) (Mikolov et al., 2013b) uses a shallow neural network to learn word embeddings by predicting context words.", "Another recent method for creating word embeddings, GloVe, is based on factoring a matrix of ratios of co-occurrence probabilities (Penning-ton et al., 2014).", "For each algorithm, we choose common parameter settings.", "For word2vec , two of the parameters that need to be chosen are window size and minimum count.", "Window size refers to the maximum distance between the current word and the predicted word (e.g., how many neighboring words to consider for each target word).", "Any word appearing less than the minimum count number of times in the corpus is discarded and not considered in the word2vec algorithm.", "For both of these features, we choose standard parameter settings, namely, a window size of 5 and a minimum count of 5. For GloVe, we also choose standard parameters.", "We 4 All word2vec experiments reported here are run in a multi-core setting, which means that these percentages are approximate.", "However, comparable results were achieved using a single-core version of word2vec .", "use 50 iterations of the algorithm for embedding dimensions less than 300, and 100 iterations for higher dimensions.", "We also add a feature reflecting the embedding dimension, namely one of five embedding dimensions: 50, 100, 200, 400, or 800.", "Overall, the regression model achieves a coefficient of determination ( R 2 ) score of 0.301 on the training data, which indicates that the regression has learned a linear model that reasonably fits the training data given.", "Using the regression model, we can analyze the weights corresponding to each of the features being considered, shown in Table 4. 
These weights are difficult to interpret, because features have different distributions and ranges.", "However, we make several general observations about the stability of word embeddings.", "Observation 1. Curriculum learning is impor-100 10 1 0 .", "tant.", "This is evident because the top two features (by magnitude) of the regression model capture where the word first appears in the training data.", "Figure 3 shows trends between training data position and stability in the PTB.", "This figure contrasts word2vec with GloVe (which is order invariant).", "To further understand the effect of curriculum learning on the model, we train a regression model with all of the features except the curriculum learning features.", "This model achieves an R 2 score of 0.291 (compared to the full model's score of 0.301).", "This indicates that curriculum learning is a factor in stability.", "Observation 2. POS is one of the biggest factors in stability.", "Table 4 shows that many of the top weights belong to POS-related features (both primary and secondary POS).", "Table 5 compares aver-2097 Primary POS Avg.", "age stabilities for each primary POS.", "Here we see that the most stable POS are numerals, verbs, and determiners, while the least stable POS are punctuation marks, adpositions, and particles.", "Observation 3. Stability within domains is greater than stability across domains.", "Table 4 shows that many of the top factors are domain-related.", "Figure 4 shows the results of the regression model broken down by domain.", "This figure shows the highest stabilities appearing on the diagonal of the matrix, where the two embedding spaces both belong to the same domain.", "The stabilities are substantially lower off the diagonal.", "Figure 4 also shows that All NYT generalizes across the other NYT domains better than Europarl, but not as well as in-domain data (All NYT encompasses data from US, NY, Business, Arts, and Sports).", "This is true even though Europarl is much larger than All NYT.", "Observation 4. Overall, GloVe is the most stable embedding algorithm.", "This is particularly apparent when only in-domain data is considered, as in Figure 5. PPMI achieves similar stability, while word2vec lags considerably behind.", "To further compare word2vec and GloVe, we look at how the stability of word2vec changes with the frequency of the word and the number of neighbors used to calculate stability.", "This is shown in Figure 6 and is directly comparable to Figure 2. Surprisingly, the stability of word2vec varies substantially with the frequency of the word.", "For lower-frequency words, as the number of nearest neighbors increases, the stability increases approximately exponentially.", "For high-frequency words, the lowest and highest number of nearest NYTUSNYTNYNYTB u s i n e ss NYTA r t s NYT Sp o r t s A ll NYTE u r o p a r l NYT US NYT NY NYT Business NYT Arts NYT Sports All NYT Europarl 18 4.4 21 3.7 4.7 20 2.6 4 2.9 18 2.7 3.6 3 2.6 19 6 11 7.8 6.4 6.1 27 2.6 4 3.2 2.5 2.4 7 30 0 15 30 45 60 75 % S t a b ili t y Figure 4: Percent stability broken down by domain.", "neighbors show the greatest stability.", "This is different than GloVe, where stability remains reasonably constant across word frequencies, as shown in Figure 2. The behavior we see here agrees with the conclusion of (Mimno and Thompson, 2017), who find that GloVe exhibits more well-behaved geometry than word2vec .", "Observation 5. 
Frequency is not a major factor in stability.", "To better understand the role that frequency plays in stability, we run separate ablation experiments comparing regression models with frequency features to regression models without frequency features.", "Our current model (using raw frequency) achieves an R 2 score of 0.301.", "Comparably, a model using the same features, but with normalized instead of raw frequency, achieves a score of 0.303.", "Removing frequency from either regression model gives a score of 0.301.", "This indicates that frequency is not a major factor in stability, though normalized frequency is a larger factor than raw frequency.", "Finally, we look at regression models using only frequency features.", "A model using only raw frequency features has an R 2 score of 0.008, while 2098 100 80 60 40 20 0 % Stability 1 10 50 100 200 800 max F r e q u e n c y i n T r a i n i n g C o r p u s ( a pp r o x . l o g b a s e d s c a l e ) 0.2 0.4 0.6 0.8 1.0 % W o r d s i n P a r t i c u l a r F r e q u e n c y B u c k e t 9998 1024 32 29998 1024 32 29998 1024 32 29998 1024 32 29998 1024 32 29998 1024 32 2 # o f N e a r e s t N e i g h b o r s ( l o g s c a l e ) Figure 6: Stability of word2vec on the PTB.", "a model with only normalized frequency features has an R 2 score of 0.0059.", "This indicates that while frequency is not a major factor in stability, it is also not negligible.", "As we pointed out in the introduction, frequency does correlate with stability (Figure 1).", "However, in the presence of all of these other features, frequency becomes a minor factor.", "Word embeddings are used extensively as the first stage of neural networks throughout NLP.", "Typically, embeddings are initalized based on a vector trained with word2vec or GloVe and then further modified as part of training for the target task.", "We study two downstream tasks to see whether stability impacts performance.", "Since we are interested in seeing the impact of 0 10 20 30 40 50 60 70 80 90 100 Word 2 Stability (%) 100 90 80 70 60 50 40 30 20 10 0 W o r d 1 S t a b ili t y ( % ) 0.0 0.1 0.2 0.3 0.4 0.5 A v e r a g e A b s o l u t e E rr o r Figure 7: Absolute error for word similarity.", "word vector stability, we choose tasks that have an intuitive evaluation at the word level: word similarity and POS tagging.", "To model word similarity, we use 300-dimensional word2vec embedding spaces trained on the PTB.", "For each pair of words, we take the cosine similarity between those words averaged over ten randomly initialized embedding spaces.", "We consider three datasets for evaluating word similarity: WS353 (353 pairs) (Finkelstein et al., 2001), MTurk287 (287 pairs) (Radinsky et al., 2011), and MTurk771 (771 pairs) (Halawi et al., 2012).", "For each dataset, we normalize the similarity to be in the range [0 , 1] , and we take the absolute difference between our predicted value and the ground-truth value.", "Figure 7 shows the results broken down by stability of the two words (we always consider Word 1 to be the more stable word in the pair).", "Word similarity pairs where one of the words is not present in the PTB are omitted.", "We find that these word similarity datasets do not contain a balanced distribution of words with respect to stability; there are substantially more unstable words than there are stable words.", "However, we still see a slight trend: As the combined stability of the two words increases, the average absolute error decreases, as reflected by the lighter color of the cells in Figure 
7 while moving away from the (0,0) data point.", "Part-of-speech (POS) tagging is a substantially more complicated task than word similarity.", "We use a bidirectional LSTM implemented using DyNet (Neubig et al., 2017).", "We train nine sets of 2099 0 10 20 30 40 50 60 70 80 90 100 % Stability max 4000 1000 250 60 15 0 F r e q u e n c y 0.0 0.1 0.2 0.3 0.4 0.5 A v g .", "(b) POS error probability when vectors may shift in training.", "(c) Cosine similarity between original word vectors and shifted word vectors.", "128-dimensional word embeddings with word2vec using different random seeds.", "The LSTM has a single layer and 50-dimensional hidden vectors.", "Outputs are passed through a tanh layer before classification.", "To train, we use SGD with a learning rate of 0.1, an input noise rate of 0.1, and recurrent dropout of 0.4.", "This simple model is not state-of-the-art, scoring 95.5% on the development set, but the word vectors are a central part of the model, providing a clear signal of their impact.", "For each word, we group tokens based on stability and frequency.", "Figure 8 shows the results.", "5 Fixing the word vectors provides a clearer pattern in the results, but also leads to much worse performance: 85.0% on the development set.", "Based on these results, it seems that training appears to compensate for stability.", "This hypothesis is supported by Figure 8c, which shows the similarity between the original word vectors and the shifted word vectors produced by the training.", "In general, lower stability words are shifted more during training.", "Understanding how the LSTM is changing the input embeddings is useful information for tasks with limited data, and it could allow us to improve embeddings and LSTM training for these low-resource tasks.", "Word embeddings are surprisingly variable, even for relatively high frequency words.", "Using a regression model, we show that domain and part-of-speech are key factors of instability.", "Downstream experiments show that stability impacts tasks using embedding-based features, though allowing embeddings to shift during training can reduce this effect.", "In order to use the most stable embedding spaces for future tasks, we recommend either using GloVe or learning a good curriculum for word2vec training data.", "We also recommend using in-domain embeddings whenever possible.", "We would like to thank Ben King and David Ju-rgens for helpful discussions about this paper, as well as our anonymous reviewers for useful feedback.", "This material is based in part upon work supported by the National Science Foundation (NSF #1344257) and the Michigan Institute for Data Science (MIDAS).", "Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF or MIDAS.", "5 The unusual dark spot that occurs at medium-high stability and low frequency is caused primarily by words that have a substantially different POS distribution in the test data than in the training data." ]
[ "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "other", "abstain", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "other", "other", "other", "other" ]
[ "Data sharing restrictions are common in NLP, especially in the clinical domain, but there is limited research on adapting models to new domains without access to the original training data, a setting known as source-free domain adaptation.", "We take algorithms that traditionally assume access to the source-domain training dataactive learning, self-training, and data augmentationand adapt them for source-free domain adaptation.", "Then we systematically compare these different strategies across multiple tasks and domains.", "We find that active learning yields consistent gains across all SemEval 2021 Task 10 tasks and domains, but though the shared task saw successful self-trained and data augmented models, our systematic comparison finds these strategies to be unreliable for source-free domain adaptation.", "Deep neural networks achieve high performance in many tasks, but typically require annotated training data for each new domain.", "Domain adaptation algorithms aim to take models trained on one domain (the source domain) and transfer the model's knowledge to another domain (the target domain).", "They typically try to do this without a huge amount of annotated data in the target domain.", "Domain adaptation can be easy if the source and target domain have similar distributions, but domains often differ substantially (Wilson and Cook, 2020).", "While there has been much progress in domain adaptation methods (Kouw, 2018) and even in unsupervised domain adaptation where there are no target-domain labels (Ramponi and Plank, 2020), most methods assume access to the labeled source data.", "Yet this assumption is often not satisfied, especially in the clinical domain due to privacy concerns (Laparra et al., 2020).", "SemEval 2021 Task 10 (Laparra et al., 2021), on source-free domain adaptation, called attention to this challenging but more realistic scenario where labeled source data are not accessible, only the model trained on the source domain data can be shared 1 , and little or no labeled target data are available.", "Participants explored methods including self-training, active learning, and data augmentation (Laparra et al., 2021) but it is hard to make fair comparisons between algorithms since different teams varied in their base implementations.", "1. The first systematic comparison of self-training, active learning, and data augmentation for source-free domain adaptation, carried out across multiple tasks and domains.", "2. We identify a formulation of source-free active learning that consistently improves performance of the source-domain model, and sometimes even outperforms fine-tuning on a large set of labeled target domain data.", "3. We perform an error analysis across tasks and domains and show that the selected formulation of active learning corrects several types of errors that self-training does not.", "Our code is publicly available.", "2 2 Related Work 2.1 Source-free Domain Adaptation Recently, there is rising interest in computer vision to develop methods for unsupervised source-free domain adaptation.", "Several works utilize a generative framework with a classifier trained on source data to generate labeled training examples (Kurmi et al., 2021; Li et al., 2020) or transfer the target ex-1 In general, it is easier to distribute models than raw data.", "For example, Lehman et al. 
(2021) found that none of the algorithms they tried could effectively recover protected health information from a pre-trained language model.", "2 github.com/xinsu626/ SourceFreeDomainAdaptation 8352 amples to match the source style (Hou and Zheng, 2020; Sahoo et al., 2020).", "Other works use self-supervised pseudo-labeling.", "Liang et al. (2020) proposes source hypothesis transfer that freezes the classifier of the source model domain but fine-tunes the encoding of the source model with a goal to reduce the entropy of individual output prediction while maintaining global diversity.", "They also augment the strategy by self-supervised pseudo labels via the nearest centroid classifier.", "Kim et al. (2020) select low self-entropy instances as class prototypes and pseudo-label the remaining target instances based on the distance to the class prototypes and progressively update the models on target data in the manner of self-training.", "Despite of a growing number of computer vision studies on source-free domain adaptation, there is limited NLP research into this challenging but realistic scenario.", "Though there is partially related research on continual learning (de Masson d'Autume et al., 2019; Sun et al., 2020) and generalization of pre-trained models (Hendrycks et al., 2020), the only work to explicitly test source-free domain adaptation is SemEval 2021 Task 10 (Laparra et al., 2021), which asked participants to perform source-free domain adaptation on negation detection and time expression recognition.", "A variety of techniques were applied to this task, including active learning, self-training, and data augmentation.", "However, different techniques were applied by different participants with different baseline models, so the shared task results do not allow us to make fair comparisons between different techniques.", "In the current article, we implement and then systematically compare these different techniques.", "Self-training (Yarowsky, 1995; McClosky et al., 2006) trains a model on a labeled dataset L and then iteratively makes predictions (pseudo-labels) on an unlabeled dataset U and re-trains.", "On each iteration, the examples in U that the model labels with high confidence (silver labels) are added to L , and the model is retrained on the new, larger L .", "This process is repeated until no more predictions are highly confident.", "Self-training has been applied to a variety of domain adaptation scenarios (Ruder and Plank, 2018; Yu et al., 2015; Cui and Bollegala, 2019), but always with the assumption that the original labeled data L is available at each iteration.", "In source-free domain adaptation, L is not available, so source-free self-training could train on only the pseudo-labels, and it is unclear whether that would yield a superior or inferior model.", "Active learning selects a small number of examples to be manually annotated, using strategies designed to select the examples that should most benefit the model.", "Various active learning selection strategies have been developed (see the survey of Settles, 2009), and recent work has shown the benefits of active learning even with pre-trained transformer models (Ein-Dor et al., 2020).", "Active learning is also frequently used in domain adaptation.", "For example, Chan and Ng (2007) applied uncertainty sampling for domain adaptation of word sense disambiguation models, and Rai et al. 
(2010) combined model confidence and a domain discriminator to select target-domain examples for sentiment analysis.", "As with self-training, active learning algorithms typically assume that the source-domain training data is available and can be combined with target-domain examples.", "Thus, the efficacy of source-free active learning is currently unclear.", "Data Augmentation enhances limited data by using existing resources (WordNet, similar datasets, etc.) and/or rule-based transformations of the training data to create new training examples.", "A variety of data augmentation techniques have been proposed (see the survey of Liu et al., 2020) including back-translation (Sennrich et al., 2016; Wang et al., 2021), lexical-substitution (Zhou et al., 2019; Arefyev et al., 2020; Wei and Zou, 2019; Miao et al., 2020), noise injection (Wei and Zou, 2019), conditional generation (Juuti et al., 2020; Malan-drakis et al., 2019; Kobayashi, 2018), and data transformation with task-specific rules or templates (Sahin and Steedman, 2018; Wang et al., 2021; Xu et al., 2020).", "Data augmentation assumes access to the source-domain training data, so cannot be used by itself in source-free domain adaptation.", "It could be coupled with source-free self-training or source-free active learning, but researchers have not yet systematically explored such combinations.", "We base our experiments off of the data and source-domain models from the tasks of SemEval 2021 Task 10: negation detection and time expression", "recognition.", "We select these tasks because:", "1. They represent real-world data-sharing problems: the negation source-domain data cannot currently be distributed and the time expression source-domain data is difficult to gain access to due to the complex data use agreements (La-parra et al., 2021).", "Only the task organizers had access to the data and permission to distribute models trained on the (de-identified) data.", "2. The annotation schemes are complex enough that the problem cannot be easily solved by manually annotating the target domain.", "Su et al. (2021) found that annotations from annotators given only the time annotation guidelines yielded no gains to models, while annotations from heavily trained annotators did yield gains.", "3. These two tasks suffer a large performance loss under domain shift: the source-trained model is 15+ points of F1 lower on the target test set than on the source test set (Laparra et al., 2021).", "The popular Amazon reviews sentiment analysis dataset (Blitzer et al., 2007) violates the points above: labeled source and target data are easily available, the annotation scheme is easy (it is artifi-cially balanced and removes reviews with neutral labels, as others have noted (He et al., 2018; Miller, 2019)), and the source domain model performs well on the target domain (within 0-4 points of F1).", "We nonetheless include some experiments on this dataset in appendix A.3.", "We find that with simple data preprocessing and source-domain hyperparameter tuning, the source-domain model alone outperforms all domain adaptation models from Ye et al. (2020) and Ben-David et al. 
(2020).", "the goal is to predict that diarrhea is negated by its context.", "The source-domain negation detection model was trained on Mayo clinic clinical notes.", "The target domains are Partners HealthCare clinical notes from the i2b2 2010 Challenge and Beth Israel ICU progress notes from the MIMIC III corpus.", "SemEval 2021 Task 10 time expression recognition is a sequence-tagging task.", "The goal is to identify the time entities in the document and label them with SCATE types (Bethard and Parker, 2016).", "For example, given the sentence: the patient underwent appendicitis surgery on August 29, 2018 , the goal is to label August as Month-Of-Year , 29 as Day-Of-Month , and 2018 as Year .", "The source-domain time expression recognition model was trained on the Mayo Clinic clinical notes of SemEval 2018 Task 6 (Laparra et al., 2018).", "The target domains are news articles (also from SemEval 2018 Task 6) and reports from food security warning systems including the UN World Food Programme and the Famine Early Warning Systems Network.", "Each task has a model trained from a source domain and a test set for each of two target domains.", "For each target domain, we split the data into 20% as a development set and 80% as a test set.", "Detailed data information is shown in table", "1. Source data We do not use source domain data.", "We use only the English RoBERTa-base models (Liu et al., 2019) (approx. 125M parameters) that the task organizers fine-tuned on the source domain data sets via the Huggingface Transform-8354 ers library v3.5.1 (Wolf et al., 2020).", "Target development data We use the development data for fine-tuning the model.", "For active learning, to simulate manual annotation, we fine-tune on a small number of automatically selected labeled examples.", "For self-training, no labels are used; we fine-tune on predictions (pseudo-labels) generated by the model on the development data.", "For oracle experiments, we fine-tune the model on all labeled examples in the development set.", "Target test data We evaluate on the test data.", "No fine-tuning is performed.", "Models always treat this data as unlabeled 3 .", "Its labels are used only during evaluation.", "We use the same evaluation metrics as in SemEval 2021 Task 10: precision, recall, and F1 score.", "We aim for a systematic analysis of three strategies with many different implementations in SemEval 2021 Task 10: self-training, active learning, and data augmentation.", "Our research questions are:", "1. How much can we gain from having human intervention (active learning) and not just the model alone (self-training)?", "2. For active learning, given a fixed annotation budget, is it better to do several iterations of selecting examples for annotation and retraining the model, or to select and retrain just once?", "3. For self training, given a fixed confidence threshold, is it better to do several iterations of generating pseudo-labels and retraining the model, or to generate and train only once?", "4. In each iteration of active learning or self-training, should we use the training data from the previous iteration or start anew?", "5. In each iteration of active learning or self-training, should we continue training the model from the previous iteration or the model from the source-domain?", "6. 
Do active learning and self-training improve with data augmentation or work better alone?", "We design source-free variants of self-training, active learning, and data augmentation that incorporate the following parameters, allowing us to investigate the questions above.", "self-training or active learning SD the data construction strategy: KeepData to keep the training data from the previous iteration, or ResetData to start anew on each iteration.", "SM the model training strategy: KeepModel to continue training the model from the previous iteration, or ResetModel to continue training from the source-domain model.", "SA whether or not to use data augmentation.", "Algorithm 1 presents our self-training algorithm.", "It follows standard self-training (Yarowsky, 1995) in using the model to add pseudo-labels to the unlabeled data (line 10).", "However, there is no source-domain labeled data, so the model can fine-tune only on the pseudo-labels.", "The remainder of the code ensures that models and/or data are kept, reset, or augmented as per the selected strategies.", "Self-training requires a measure of model confidence on each prediction.", "In both tasks, we add pseudo-labeled training data a sentence at a time, so we measure confidence at the sentence level.", "In negation detection, we use the predicted probability 8355 Algorithm 2: Source-Free Active Learning Algorithm Input: M : the source-domain model D : the development set of the target domain T : the maximum number of iterations K : the number of annotations per iteration SD : the data construction strategy SM : the model training strategy SA : the data augmentation strategy 1 M 0 Copy ( M ) 2 D 0 Copy ( D ) 3 L 4 for i 0 to T do 5 if SD = ResetData then 6 L = 7 D = D 0 8 DU [ d for d D sorted by uncertainty of M ( d )] 9 LU { ( d, Annotate ( d )) for d top K of DU } 10 L L LU 11 if SD = KeepData then 12 D D { d for ( d, l ) LU } 13 if SA = Augment then 14 L L Augment ( LU ) ; 15 if SM = ResetModel then 16 M M 0 17 Fine-tune M on L ; at RoBERTa's special sentence-initial token <s>.", "In time expression recognition, we use the average of the predicted probabilities of the most probable class of each token.", "Algorithm 2 presents our active learning algorithm.", "It follows an approach similar to Su et al. (2021).", "Like most active learning algorithms, the core is to select examples the model is uncertain of (line 8) and then manually annotate them (line 9).", "Since our development sets are already annotated, we simulate annotation by simply revealing the (previously hidden) labels for the selected examples.", "Active learning requires a measure of model uncertainty on each prediction.", "In both tasks, we add annotations a sentence at a time, so we measure uncertainty at the sentence level.", "In negation detection, we use the predicted entropy at RoBERTa's special sentence-initial token, <s>.", "In time expression recognition, we use the average of the predicted entropies of the tokens in the sentence.", "Inspired by Miao et al. 
(2020), we use a pool-based data augmentation method to automatically increase the size of the training set.", "In negation detection, we construct a pool of all event words in the unlabeled target domain test data.", "For each development data example to be augmented, we substitute its event with n randomly-sampled words from the pool.", "For example, if data augmentation is performed on the sentence: Has no <e> diarrhea </e> , we replace the diarrhea with random words from the pool, resulting in sentences like Has no <e> asthma </e> .", "In time expression recognition, we construct a pool of words for each time entity type using the guidelines of the SCATE annotation schema, excluding words that do not appear in the unlabeled target domain test data.", "For each entity in a development data example to be augmented, we substitute it with n randomly-sampled words from the pool for its entity type.", "For example, in the sentence, the patient underwent appendicitis surgery on August 29, 2018 , there are three time entities (Au-gust: Month-Of-Year, 29: Day-Of-Month, 2018: Year).", "Data augmentation can therefore generate up to n 3 sentences with different years, months, and days, e.g., the patient underwent appendicitis surgery on September 1st, 2017 .", "The input to the source-domain models for both tasks is a sentence.", "The output for the negation detection model is a sentence label (negated or not negated).", "The output for the time expression model is one label per token (its time entity type).", "For both tasks, we use the conventional RoBERTa input format, surrounding the sentence with the special tokens <s> and </s>.", "The negation detection data is already split into sentences.", "For the time recognition data, we split it into sentences using the English sentencizer from Spacy v2.3.2 (Honnibal et al., 2020).", "When we fine-tune the source-domain model on the target domain, we keep the same training hyperparameters as were used when the shared task organizers trained the models on the source domains.", "In source-free domain adaptation, there is no (or very little) labeled development data available, so it is not possible to tune hyperparameters.", "All hyperparameters are given in appendix A.1.", "All experiments are run on a single Nvidia P100 GPU.", "The total approximate GPU hours are 70 hours.", "In self-training, we set the threshold to 0.95, and experiment with running just a single iteration and with running 30 iterations with the different 8356 SD and SM strategies.", "The threshold and the number of iterations are adapted from Su et al. (2021).", "Training may run for fewer iterations when the stopping conditions are met.", "In active learning, we set our annotation budget to 96 sentences, and experiment with spending these 96 sentences at once and in 8 iterations with the different SD and SM strategies.", "For all experiments, we run one version with data augmentation (with n = 5 ) and one without.", "1. Source-Domain Model : The baseline.", "It is unadapted, trained only on the source domain.", "2. Fine-Tuned Source-Domain Model : The oracle.", "It is fine-tuned on the target domain using the entire labeled development set.", "3. Self-Distilled Model : A RoBERTa-base model fine-tuned on the development set using pseudo labels generated by the source-domain model.", "4. 
Passive Learning Model : The source-domain model fine-tuned on 96 randomly sampled examples from the labeled development set.", "Tables 2 and 3 show the results of our experiments.", "We are interested less in the best model for a particular configuration, but rather in which config-urations are successful across multiple tasks and domains.", "This is because in source-free domain adaptation, there is typically no (or very little) labeled target domain data available for hyperparameter tuning.", "Therefore, what we need is a universal strategy that does not require careful tuning.", "For source-free active learning, we find that even small amounts of annotated data are useful, and that smart data selection (e.g., using uncertainty scores) is usually helpful.", "The active learning KeepData models (rows 6, 8, 11, and 13 in tables 2 and 3) have higher F1s than the baseline source domain models across all tasks and domains (0.054 F1 higher on average).", "Active learning KeepData models also outperform passive learning models (that randomly select data) in 14 out of 16 cases, and are at least as good as, and typically much better than, the self-training models (rows 15-24 in tables 2 and 3).", "The ResetModel+ResetData models always have the worst F1s of the active learning models (rows 7 and 12 in tables 2 and 3).", "Several active learning models achieve higher F1s than the oracle model that fine-tuned on the full labeled development set (row 8, 10, 11, 13, 14 in table 3 Time: News and row 8, 11, 14 in table 3 Time: Food).", "1. If there is sufficient expertise to label the data, use active learning and iteratively adapt the model with the KeepModel+KeepData strategy instead of spending the annotation budget all at 8357 Negation: MIMIC-III Negation: i2b2 # Strategy F P R F P R 1 Source-Domain Model (baseline) 0.656 0.921 0.510 0.837 0.855 0.820 2 Fine-Tuned Source-Domain Model (oracle) 0.868 0.875 0.862 0.925 0.928 0.922 3 Self-Distilled Model 0.623 0.825 0.501 0.846 0.849 0.842 4 Passive Learning Model 0.722 0.792 0.663 0.882 0.914 0.853 Active Learning 5 AL ( 96 1 ) 0.759 0.901 0.656 0.886 0.943 0.836 6 AL ( 12 8 ) + ResetModel + KeepData 0.800 0.828 0.774 0.891 0.951 0.838 7 AL ( 12 8 ) + ResetModel + ResetData 0.618 0.842 0.489 0.778 0.972 0.649 8 AL ( 12 8 ) + KeepModel + KeepData 0.817 0.867 0.773 0.859 0.852 0.865 9 AL ( 12 8 ) + KeepModel + ResetData 0.777 0.890 0.689 0.877 0.928 0.831 Active Learning + Data Augmentation 10 AL ( 96 1 ) + DA (5) 0.708 0.652 0.773 0.883 0.937 0.834 11 AL ( 12 8 ) + ResetModel + KeepData + DA (5) 0.805 0.803 0.806 0.891 0.960 0.831 12 AL ( 12 8 ) + ResetModel + ResetData + DA (5) 0.586 0.489 0.730 0.817 0.960 0.710 13 AL ( 12 8 ) + KeepModel + KeepData + DA (5) 0.805 0.878 0.744 0.881 0.925 0.841 14 AL ( 12 8 ) + KeepModel + ResetData + DA (5) 0.745 0.882 0.645 0.889 0.929 0.852 Self-training 15 ST (1) 0.677 0.916 0.537 0.854 0.871 0.838 16 ST (30) + ResetModel + KeepData 0.679 0.937 0.533 0.857 0.876 0.839 17 ST (30) + ResetModel + ResetData 0.695 0.912 0.562 0.861 0.880 0.843 18 ST (30) + KeepModel + KeepData 0.664 0.906 0.525 0.864 0.890 0.840 19 ST (30) + KeepModel + ResetData 0.654 0.879 0.521 0.858 0.883 0.834 Self-training + Data Augmentation 20 ST (1) + DA (5) 0.654 0.943 0.501 0.863 0.894 0.833 21 ST (30) + ResetModel + KeepData + DA (5) 0.000 0.000 0.000 0.861 0.887 0.838 22 ST (30) + ResetModel + ResetData + DA (5) 0.000 0.000 0.000 0.864 0.897 0.834 23 ST (30) + KeepModel + KeepData + DA (5) 0.000 0.000 0.000 0.854 0.869 0.839 24 ST (30) + 
KeepModel + ResetData + DA (5) 0.000 0.000 0.000 0.855 0.885 0.827 Table 2: Performance of domain adaptation strategies on the negation detection target domains.", "This emphasizes a challenge of source-free domain adaptation: more data is not always better data.", "Since we do not have access to the source domain training data, if we fine-tune on too much target domain data the model may start to forget what it learned on the source domain, i.e., catastrophic forgetting (McCloskey and Cohen, 1989).", "In these cases, the active learning models, by selecting a small set of just the most uncertain examples, reap the benefits of knowing something about the target domain without losing what they learned from the source domain.", "For source-free self-training, we find that iteratively updating both model and data is slightly above baseline, and that it is better to start from the source-domain model than from RoBERTa without fine-tuning.", "The KeepModel+KeepData (without data augmentation) is slightly above the source-domain model across all tasks and domains (0.013 F1 higher on average).", "Every other configuration, even if they outperform KeepModel+KeepData in one task or domain, is below the source-domain baseline in another.", "All self-trained models without data augmentation (which start from the source-domain model) do at least outperform self-distilled models (which start from the RoBERTa model without fine-tuning; row 3 in tables 2 and 3).", "The small gains from the only self-training configu-ration that consistently outperformed the source-domain model suggest that self-training may not be worthwhile for source-free domain adaptation.", "Data augmentation helped in some cases (e.g., self-training time expression recognition on news), and hurt in others (e.g., self-training time expression recognition on food security).", "Data augmentation sometimes led to ill-behaving models: on the negation MIMIC-III dataset, data augmentation made the self-trained model predict all examples as not negated resulting in 0.000 F1 (rows 21 -24 in table 2: Negation-MIMIC-III).", "This suggests that data augmentation (or at least the variants of it that we explored) is probably not viable for source-free domain adaptation where no labeled data for tuning strategies is available.", "once.", "This is the best model without data augmentation in three of the four domains (Nega-tion: MIMIC III, Time: News, Time: Food).", "Note that expertise is important: Su et al. (2021) found that active learning with non-experts in the face of a complex annotation scheme did not yield performance improvements.", "2. Self-training and data augmentation, at least as implemented here, are not good choices for source free domain adaptation: sometimes they led to gains, and sometimes they led to losses.", "While a good strategy could be found by labeling some target domain data and performing hyperparameter search, such annotation effort would have a higher payoff if used for active learning instead.", "3. Active learning is better than passive learning: smart example selection is better than random example selection.", "4. Self-training is better than self-distillation: the models benefit from the task knowledge learned from the source-domain.", "Our systematic analysis allowed us to make the above more specific suggestions than the shared task's main suggestion that the best performing [systems] incorporated. . . 
active-learning, handcrafted heuristics or semiautomatically building a training set (Laparra et al., 2021).", "We performed an error analysis to try to determine if different adaptation strategies resulted in different types of errors being corrected (as compared to the source domain model).", "For negation detection we sampled and categorized around 200 errors of the source-domain model for each target domain.", "When the model failed to predict a negation, we manually categorized the error by the negation cue ( no , free , absent , etc.).", "When the model predicted a negation it should not have, we manually cate-8358 Time: News Time: Food # Strategy F P R F P R 1 Source-Domain Model (baseline) 0.771 0.772 0.770 0.781 0.834 0.734 2 Fine-Tuned Source-Domain Model (oracle) 0.844 0.826 0.864 0.851 0.841 0.861 3 Self-Distilled Model 0.572 0.590 0.555 0.766 0.831 0.711 4 Passive Learning Model 0.796 0.783 0.809 0.770 0.755 0.785 Active Learning 5 AL ( 96 1 ) 0.812 0.800 0.825 0.819 0.821 0.818 6 AL ( 12 8 ) + ResetModel + KeepData 0.812 0.794 0.830 0.842 0.844 0.840 7 AL ( 12 8 ) + ResetModel + ResetData 0.771 0.771 0.770 0.781 0.832 0.737 8 AL ( 12 8 ) + KeepModel + KeepData 0.861 0.844 0.879 0.872 0.866 0.879 9 AL ( 12 8 ) + KeepModel + ResetData 0.772 0.758 0.787 0.781 0.797 0.765 Active Learning + Data Augmentation 10 AL ( 96 1 ) + DA (5) 0.856 0.829 0.884 0.840 0.824 0.855 11 AL ( 12 8 ) + ResetModel + KeepData + DA (5) 0.860 0.830 0.893 0.856 0.840 0.873 12 AL ( 12 8 ) + ResetModel + ResetData + DA (5) 0.790 0.748 0.836 0.793 0.782 0.805 13 AL ( 12 8 ) + KeepModel + KeepData + DA (5) 0.849 0.820 0.881 0.841 0.821 0.863 14 AL ( 12 8 ) + KeepModel + ResetData + DA (5) 0.853 0.828 0.879 0.856 0.831 0.881 Self-training 15 ST (1) 0.753 0.733 0.774 0.777 0.807 0.750 16 ST (30) + ResetModel + KeepData 0.786 0.791 0.782 0.780 0.815 0.747 17 ST (30) + ResetModel + ResetData 0.727 0.688 0.770 0.787 0.815 0.761 18 ST (30) + KeepModel + KeepData 0.784 0.777 0.792 0.786 0.832 0.745 19 ST (30) + KeepModel + ResetData 0.633 0.551 0.743 0.789 0.829 0.752 Self-training + Data Augmentation 20 ST (1) + DA (5) 0.800 0.794 0.805 0.756 0.787 0.726 21 ST (30) + ResetModel + KeepData + DA (5) 0.789 0.790 0.788 0.754 0.780 0.730 22 ST (30) + ResetModel + ResetData + DA (5) 0.795 0.792 0.798 0.765 0.788 0.744 23 ST (30) + KeepModel + KeepData + DA (5) 0.794 0.801 0.788 0.759 0.786 0.734 24 ST (30) + KeepModel + ResetData + DA (5) 0.797 0.791 0.802 0.747 0.771 0.724 Table 3: Performance of domain adaptation strategies on the time expression recognition target domains.", "gorized the error into wrong cue (there was a negation cue in the sentence but it did not apply to the target event) or short sentence (especially on the i2b2 domain, the model liked to predict all short sentences as negated).", "For time expression recognition, we categorized all errors of the source-domain model by entity type (insideoutsidebeginning format) for each target domain.", "For both tasks, we then calculated how many of these source-domain model errors the best adapted models continued to make.", "Heatmaps of these analyses are plotted in appendix A.2.", "Across all tasks and domains, we see that the best self-trained models correct errors roughly evenly across source-domain error categories, while the best active learning models correct different errors, more like the oracle (target-fine-tuned) model.", "For example, the oracle model and active learning adapted models correct many more wrong cue errors in the negation i2b2 domain, 
more denies and none errors in the negation MIMIC III domain, more B-Period and B-Month-Of-Year entities in the time news domain, and more B-Season-Of-Year, I-Season-Of-Year, and B-This entities in the time food domain.", "Some error types appear to be only learnable with substantially more data.", "Only the oracle model is able to correct errors with the non and afebrile negation cues in the i2b2 domain and with the hold negation cue in MIMIC-III domain.", "This suggests that the source-domain model may be very con-fident in some types of wrong examples causing them not to be selected in active learning and generating poor pseudo-labels in self-training.", "In this paper, we present a detailed comparison of the use of active learning, self-training and data augmentation to adapt a source-domain model on a target domain when the source-domain training data is unavailable.", "We identify a specific formula-8359 tion of source-free active learning that consistently improves performance of the source-domain model.", "We believe our work highlights the interesting challenges of source-free domain adaptation, and its systematic comparison provides a solid base for future research in this area.", "Research reported in this publication was supported by the National Library of Medicine of the National Institutes of Health under Award Numbers R01LM012918 and R01LM010090.", "The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.", "Our comparison experiments and proposed formulation are intended to encourage model sharing in source-free domain adaptation while avoiding the risk of privacy leakage caused by direct data sharing.", "The data we use in this experiment are publicly available and from a shared task, however some of that data is from health institutions and requires a data use agreement to work with the data.", "Though recent research has found it difficult to recover protected information from trained models (Lehman et al., 2021), there is still some small risk that more complex models may be able to do so.", "However, as our research is a comparative study, we are not directly releasing models, and thus not risking any release of protected health information." ]
[ "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "result", "result", "other", "other", "abstain", "method", "abstain", "abstain" ]
[ "We propose NeuralWOZ, a novel dialogue collection framework that uses model-based dialogue simulation.", "NeuralWOZ has two pipelined models, Collector and Labeler.", "Collector generates dialogues from (1) user's goal instructions, which are the user context and task constraints in natural language, and (2) system's API call results, which is a list of possible query responses for user requests from the given knowledge base.", "Labeler annotates the generated dialogue by formulating the annotation as a multiple-choice problem, in which the candidate labels are extracted from goal instructions and API call results.", "We demonstrate the effectiveness of the proposed method in the zero-shot domain transfer learning for dialogue state tracking.", "In the evaluation, the synthetic dialogue corpus generated from NeuralWOZ achieves a new state-of-the-art with improvements of 4.4% point joint goal accuracy on average across domains, and improvements of 5.7% point of zero-shot coverage against the MultiWOZ 2.1 dataset.", "1 1 Introduction For a task-oriented dialogue system to be scalable, the dialogue system needs to be able to quickly adapt and expand to new scenarios and domains.", "However, the cost and effort in collecting and annotating an expanding dataset is not only labor-intensive but also proportional to the size and variety of the unseen scenarios.", "There are three types of dialogue system expansions.", "(1) The simplest expansion is the addition of new instances in the knowledge base (KB) under the identical schema.", "For example, the addition of newly opened restaurants in the KB of restaurant domain falls under this category.", "(2) A slightly more complicated expansion involves modifica-tions to the KB schema, and possibly the related 1 The code is available at github.com/naver-ai/neuralwoz.", "instances.", "For example, additions of new constraint types to access the KB due to the change in needs of the user often require a restructuring of the KB.", "If a dialogue system built with only restaurant search in mind observes user's requests about not only restaurant location and but also traffic informa-tion for navigating, the system now needs a new knowledge base including the additional different domain.", "(3) The most complex expansion is the one that expands across multiple domains.", "For example, imagine an already built dialogue system supported restaurant and hotel reservation domains, but now needs to expand to points of interest or other domains.", "It is difficult to expand to new domain without collecting new data instances and building a new knowledge base, if the schema between the source (restaurant and hotel in this case) and target domain (point of interest) look different.", "To support development of scalable dialogue systems, we propose NeuralWOZ, a model-based dialogue collection framework.", "NeuralWOZ uses goal instructions and KB instances for synthetic dialogue generation.", "NeuralWOZ mimics the mechanism of a Wizard-of-Oz (Kelley, 1984; Dahlback et al., 1993) and Figure 1 illustrates our approach.", "NeuralWOZ has two neural components, Collector and Labeler.", "Collector generates a dialogue by using the given goal instruction and candidate relevant API call results from the KB as an input.", "Labeler annotates the generated dialogue with appropriate labels by using the schema structure of the dialogue domain as meta information.", "More specifically, Labeler selects the labels from candidate labels which can be obtained from the goal instruction 
and the API call results.", "As a result, NeuralWOZ is able to generate a dialogue corpus without training data of the target domain.", "We evaluate our method for zero-shot domain transfer task (Wu et al., 2019; Campagna et al., 2020) to demonstrate the ability to generate corpus for unseen domains, when no prior training data exists.", "In dialogue state tracking (DST) task with MultiWOZ 2.1 (Eric et al., 2019), the synthetic data generated with NeuralWOZ achieves 4.4% point higher joint goal accuracy and 5.7% point higher zero-shot coverage than the existing baseline.", "Additionally, we examine few-shot and full data augmentation tasks using both training data and synthetic data.", "We also illustrate how to collect synthetic data beyond MultiWOZ domains, and discuss the effectiveness of the proposed approach as a data collection strategy.", "Our contributions are as follows: NeuralWOZ, a novel method for generating dialogue corpus using goal instruction and knowledge base information New state-of-the-art performance on the zero-shot domain transfer task Analysis results highlighting the potential synergy of using the data generated from NeuralWOZ together with human-annotated data 2 Related Works 2.1 Wizard-of-Oz Wizard-of-Oz (WOZ) is a widely used approach for constructing dialogue data (Henderson et al., 2014a,b; El Asri et al., 2017; Eric and Manning, 2017; Budzianowski et al., 2018).", "It works by facilitating a role play between two people.", "User utilizes a goal instruction that describes the context of the task and details of request and system has access to a knowledge base, and query results from the knowledge base.", "They take turns to converse, while the user makes requests one by one following the instructions, the system responds according to the knowledge base, and labels user's utterances.", "Other studies on dialogue datasets use the user simulator-based data collection approaches (Schatz-mann et al., 2007; Li et al., 2017; Bordes et al., 2017; Shah et al., 2018; Zhao and Eskenazi, 2018; Shah et al., 2018; Campagna et al., 2020).", "They define domain schema, rules, and dialogue templates to simulate user behavior under certain goals.", "The ingredients to the simulation are designed by developers and the dialogues are realized by predefined mapping rules or paraphrasing by crowdworkers.", "If a training corpus for the target domain exists, neural models that synthetically generates dialogues can augment the training corpus (Hou et al., 2018; Yoo et al., 2019).", "For example, Yoo et al. (2020) introduce Variational Hierarchical Dialog Autoencoder (VHDA), where hierarchical latent variables exist for speaker identity, user's request, dialog state, and utterance.", "They show the effectiveness of their model on single-domain DST tasks.", "SimulatedChat (Mohapatra et al., 2020) also uses goal instruction for dialogue augmentation.", "Although it does not solve zero-shot learning task with domain expansion in mind, we run auxiliary experiments to compare with NeuralWOZ, and the results are in the Appendix D. 2.3 Zero-shot Domain Transfer In zero-shot domain transfer tasks, there is no data for target domain, but there exists plenty of data for other domains similar to target domain.", "Solving the problem of domain expansion of dialogue systems can be quite naturally reducted to solving zero-shot domain transfer.", "Wu et al. 
(2019) conduct a landmark study on the zero-shot DST.", "They Figure 2: Illustration of Collector and Labeler.", "suggest a model, Transferable Dialogue State Generator (TRADE), which is robust to a new domain where few or no training data for the domain exists.", "Kumar et al. (2020) and Li et al. (2021) follow the same experimental setup, and we also compare NeuralWOZ in the same experiment setup.", "Abstract Transaction Dialogue Model (ATDM) (Cam-pagna et al., 2020), another method for synthesizing dialogue data, is another baseline for zero-shot domain transfer tasks we adopt.", "They use rules, abstract state transition, and templates to synthesize the dialogue, which is then fed into a model-based zero-shot learner.", "They achieved state-of-the-art in the task using the synthetic data on SUMBT (Lee et al., 2019), a pretrained BERT (Devlin et al., 2019) based DST model.", "In this section, we describe the components of NeuralWOZ in detail, and how they interact with each other.", "Figure 2 illustrates the input and output of two modules in NeuralWOZ.", "The synthetic corpus, which Collector and Labeler made, are used for the training of the DST baselines, TRADE (Wu et al., 2019) and SUMBT (Lee et al., 2019) in our experiments.", "Domain Schema In task-oriented dialogues, there are two slot types; informable and requestable slots (Henderson et al., 2014a; Budzianowski et al., 2018).", "The informable slots are the task constraints to find relevant information from user requests, for example, restaurant-pricerange, restaurant-food, restaurant-name, and restaurant-book people in Figure", "1. The requestable slots are the additional details of user requests, like reference number and address in Figure", "1. Each slot S can have its corresponding value V in a scenario.", "In multi-domain scenarios, each domain has a knowledge base KB , which consists of slot-value pairs corresponding to its domain schema.", "The API call results in Figure 1 are the examples of the KB instances of the restaurant domain.", "Goal Instruction The goal instruction, G , is a natural language text describing constraints of user behavior in the dialogue D including informable and requestable slots.", "The paragraph consists of four sentences at the top of Figure 1 is an example.", "We define a set of informable slot-value pairs that explicitly expressed on the G as CG , which we formally define as CG = { ( S Gi , V Gi ) | 1 i | CG | , S Gi informable } .", "(restaurant-pricerange, expensive) and (restaurant-food, british) are examples of the elements of CG (Fig-ure 1).", "API Call Results The API call results, A , are corresponding query results of the CG from KB.", "We formally define A = { a i | 1 i | A | , a i KB } .", "Each a i is associated with its domain, domain a i , and with slot-value pairs, C a i = { ( S a i k , V a i k ) | 1 k | C a i |} .", "A slot S a i k can be either informable or requestable slot.", "For example, the restaurant instance, graffiti in Figure 1, is a query result from (restaurant-pricerange, expensive) and (restaurant-food, british) described in the goal instruction.", "State Candidate We define informable slot-value pairs that are not explicit in G but accessible by A in D as CA = { ( S Ai , V Ai ) | 1 i | CA | , S Ai informable } .", "It contains all informable slot-value pairs from C a 1 to C a | A | .", "The elements of CA are likely to be uttered by summaries of current states or recommendations of KB instances by the system side in D .", "The system utterance of the second 
turn in Figure 1 is an example (I recommend graffiti.).", "In this case, the slot-value pair (restaurant-name, graffiti) can be obtained from the A , not from the G .", "Finally, state candidate C is the union of CG and CA .", "It is a full set of the dialogue state for the dialogue D from given G and A .", "Thus, it can be used as label candidates of dialogue state tracking annotation.", "Collector is a sequence-to-sequence model, which takes a goal instruction G and API call results A as the input and generates dialogue DT .", "The generated dialogue DT = ( r 1 , u 1 , ..., r T , u T ) is the sequence of system response r and user utterance u .", "They are represented by N tokens ( w 1 , ..., w N ) 2 .", "We denote the input of Collector as <s> G </s> A , where the is concatenate operation.", "The <s> and </s> are special tokens to indicate start and seperator respectively.", "The to-kenized natural language description of G is directly used as the tokens.", "The A takes concatenation of each a i ( a 1 a | A | ) 3 .", "For each a i , we flatten the result to the token sequence, <domain> domain a i <slot> S a i 1 V a i 1 <slot> S a i | C ai | V a i | C ai | .", "The <domain> and <slot> are other special tokens as separators.", "The objective function of Collector is LC = 1 MC MC (cid:88) j =1 N j (cid:88) i =1 log p ( w ji | w j<i , G j , A j ) .", "Our Collector model uses the transformer architecture (Vaswani et al., 2017) initialized with pretrained BART (Lewis et al., 2020).", "Collector is trained using negative log-likelihood loss, where MC is the number of training dataset for Collector and N j is target length of the j -th instance.", "Following Lewis et al. (2020), label smoothing is used during the training with the smoothing parameter of 0.1.", "We formulate labeling as a multiple-choice problem.", "Specifically, Labeler takes a dialogue context D t = ( r 1 , u 1 , ..., r t , u t ) , question q , and a set of answer options O = { o 1 , o 2 , ..., o | O | } , and selects one answer o O .", "Labeler encodes the inputs for each o i separately, and s o i R 1 is the corresponding logit score from the encoding.", "Finally, the logit score is normalized via softmax function over the answer option set O .", "p ( o i | D t , q, O ) = exp( s o i ) (cid:80) | O | j exp( s o j ) , s o i = Labeler ( D t , q, o i ) , i.", "The input of Labeler is a concatenation of D t , q , and o i , <s> D t </s> q </s> o i </s> , with special tokens.", "For labeling dialogue states to D t , we use the slot description for each corresponding slot type, S i , as the question, for example, what is area or place of hotel? for hotel-area in Figure", "2. 
We populate corresponding answer options OS i = { V j | ( S j , V j ) C, S j = S i } from the state candidate set C .", "There are two special values, Dontcare to indicate the user has no preference and None to indicate the user is yet to specify a value for this slot (Henderson et al., 2014a; Budzianowski et al., 2018).", "We include these values in the OS i .", "For labeling the active domain of D t , which is the domain at t -th turn of D t , we define domain question, for example what is the domain or topic of current turn?, for q and use predefined domain set O domain as answer options.", "In MultiWOZ, O domain = { Attraction, Hotel, Restaurant, Taxi, Train } .", "Our Labeler model employs a pretrained RoBERTa model (Liu et al., 2019) as the initial weight.", "Dialogue state and domain labeling are trained jointly based on the multiple choice setting.", "Preliminary result shows that the imbalanced class problem is significant in the dialogue state labels.", "Most of the ground-truth answers is None given question 4 .", "Therefore, we revise the negative log-likelihood objective to weight other (notNone ) answers by multiplying a constant to the log-likelihood when the answer of training instance is 4 The number of None in the training data is about 10 times more than the number of others not None .", "LL = 1 ML ML (cid:88) j =1 T (cid:88) t =1 N q (cid:88) i =1 L jt,i L jt,i = (cid:40) log p ( o jt,i | D jt , q ji , O ji ) , if o jt,i (cid:54) = None log p ( o jt,i | D jt , q ji , O ji ) , otherwise", ", where o jt,i denotes the answer of i -th question for j -th training dialogue at turn t , the N q is the number of questions, and ML is the number of training dialogues for Labeler.", "We empirically set to a constant 5 .", "We first define goal template G .", "5 G is a delexical-ized version of G by changing each value V Gi expressed on the instruction to its slot S Gi .", "For example, the expensive and british of goal instruction in Figure 1 are replaced with restaurant-pricerange and restaurant-food, respectively.", "As a result, domain transitions in G becomes convenient.", "First, G is sampled from a pre-defined set of goal template.", "API call results A , which correspond to domain transitions in G , are randomly selected from the KB .", "Especially, we constrain the sampling space of A when the consecutive scenario among domains in G have shared slot values.", "For example, the sampled API call results for restaurant and hotel domain should share the value of area to support the following instruction I am looking for a hotel nearby the restaurant.", "G and A are aligned to become GA .", "In other words, each value for S Gi in G is assigned using the corresponding values in A .", "6 Then, Collector generates dialogue D , of which the total turn number is T , given GA and A .", "More details are in Appendix A. Nucleus sampling (Holtzman et al., 2020) is used for the generation.", "We denote dialogue state and active domain at turn t as B t and domain t respectively.", "The B t , { ( S j , V j,t ) | 1 j J } , has J number of predefined slots and their values at turn t .", "It means Labeler is asked J (from slot descriptions) + 1 (from domain question) questions regarding dialogue context D t from Collector.", "Finally, the out-5 In Budzianowski et al. 
(2018), they also use templates like ours when allocating goal instructions to the user in the Wizard-of-Oz setup.", "6 Booking-related slots, e.g., the number of people, time, day, and etc., are randomly sampled for their values since they are independent of the A .", "We use MultiWOZ 2.1 (Eric et al., 2019) dataset 7 for our experiments.", "It is one of the largest publicly available multi-domain dialogue data and it contains 7 domains related to travel (attraction, hotel, restaurant, taxi, train, police, hospital), including about 10,000 dialogues.", "The MultiWOZ data is created using WOZ so it includes goal instruction per each dialogue and domain-related knowledge base as well.", "We train our NeuralWOZ using the goal instructions and the knowledge bases first.", "Then we evaluate our method on dialogue state tracking with and without synthesized data from the NeuralWOZ using five domains (attraction, restaurant, hotel, taxi, train) in our baseline, and follow the same preprocessing steps of Wu et al. (2019); Campagna et al. (2020).", "We use the pretrained BART-Large (Lewis et al., 2020) for Collector and RoBERTa-Base (Liu et al., 2019) for Labeler.", "They share the same byte-level BPE vocab (Sennrich et al., 2016) introduced by Radford et al. (2019).", "We train the pipelined models using Adam optimizer (Kingma and Ba, 2017) with learning rate 1e-5, warming up steps 1,000, and batch size 32.", "The number of training epoch is set to 30 and 10 for Collector and Labeler respectively.", "For the training phase of Labeler, we use a state candidate set from ground truth dialogue states B 1: T for each dialogue, not like the synthesizing phase where the options are obtained from goal instruction and API call results.", "We also evaluate the performance of Labeler itself like the training phase with validation data (Table 5).", "Before training Labeler on the MultiWOZ 2.1 dataset, we pretrain Labeler on DREAM 8 (Sun et al., 2019) to boost Labeler's performance.", "This is similar to coarse-tuning in Jin et al. (2019).", "The same hyper parameter setting is used for the pretraining.", "For the zero-shot domain transfer task, we exclude dialogues which contains target domain from 7 https://github.com/budzianowski/multiwoz 8 The DREAM is a multiple-choice question answering dataset in dialogue and includes about 84% of non-extractive answers.", "the training data for both Collector and Labeler.", "This means we train our pipelines for every target domain separately.", "We use the same seed data for training as Campagna et al. (2020) did in the few-shot setting.", "All our implementations are conducted on NAVER Smart Machine Learning (NSML) platform (Sung et al., 2017; Kim et al., 2018) using hug-gingface's transformers library (Wolf et al., 2020).", "The best performing models, Collector and Labeler, are selected by evaluation results from the validation set.", "We synthesize 5,000 dialogues for every target domain for both zero-shot and few-shot experiments 9 , and 1,000 dialogues for full data augmentation.", "For zero-shot experiment, since the training data are unavailable for a target domain, we only use goal templates that contain the target domain scenario in the validation set similar to Campagna et al. 
(2020).", "We use nucleus sampling in Collector with parameters top p ratio in the range { 0 .", "92 , 0 .", "98 } and temperature in the range { 0 .", "7 , 0 .", "9 , 1 .", "0 } .", "It takes about two hours to synthesize 5,000 dialogues using one V100 GPU.", "More statistics is in Appendix B. 4.4 Baselines We compare NeuralWOZ with baseline methods both zero-shot learning and data augmentation using MultiWOZ 2.1 in our experiments.", "We use a baseline zero-shot learning scheme which does not 9 In Campagna et al. (2020), the average number of synthesized dialogue over domains is 10,140.", "use synthetic data (Wu et al., 2019).", "For data augmentation, we use ATDM and VHDA.", "ATDM refers to a rule-based synthetic data augmentation method for zero-shot learning suggested by Campagna et al. (2020).", "It defines rules including state transitions and templates for simulating dialogues and creates about 10,000 synthetic dialogues per five domains in the MultiWOZ dataset.", "Campagna et al. (2020) feed the synthetic dialogues into zero-shot learner models to perform zero-shot transfer task for dialogue state tracking.", "We also employ TRADE (Wu et al., 2019) and SUMBT (Lee et al., 2019) as baseline zero-shot learners for fair comparisons with the ATDM.", "VHDA refers to model-based generation method using hierarchical variational autoencoder (Yoo et al., 2020).", "It generates dialogues incorporating information of speaker, goal of the speaker, turn-level dialogue acts, and utterance sequentially.", "Yoo et al. (2020) augment about 1,000 dialogues for restaurant and hotel domains in the MultiWOZ dataset.", "For a fair comparison, we use TRADE as the baseline model for the full data augmentation experiments.", "Also, we compare ours with the VHDA on the single-domain augmentation setting following their report.", "We use both joint goal accuracy (JGA) and slot accuracy (SA) as the performance measurement.", "The JGA is an accuracy which checks whether all slot values predicted at each turn exactly match the ground truth values, and the SA is the slotwise accuracy of partial match against the grouth Synthetic TRADE SUMBT no syn 44.2 / 96.5 46.7 / 96.7 ATDM 43.0 / 96.4 46.9 / 96.6 NeuralWOZ 45.8 / 96.7 47.1 / 96.8 Table 2: Full data augmentation on multi-domain DST.", "truth values.", "Especially for zero and few-shot setting, we follow the previous setup (Wu et al., 2019; Campagna et al., 2020).", "Following Campagna et al. (2020), the zero-shot learner model should be trained on data excluding the target domain, and tested on the target domain.", "We also add synthesized data from our NeuralWOZ which is trained in the same way, i.e., leave-one-out setup, to the training data in the experiment.", "Our method achieves new state-of-the-art of zero-shot domain transfer learning for dialogue state tracking on the MultiWOZ 2.1 dataset (Table 1).", "Except for the hotel domain, the performance over all target domains is significantly better than the previous sota method.", "We discuss the lower performance in hotel domain in the analysis section.", "Following the work of Campagna et al. 
(2020), we also measure zero-shot coverage, which refers to the accuracy ratio between zero-shot learning over target domain, and fully trained model including the target domain.", "Our NeuralWOZ achieves 66.9% and 79.2% zero-shot coverage on TRADE and SUMBT, respectively, outperforming previous state-of-the-art, ATDM, which achieves 61.2% and 73.5%, respectively.", "For full data augmentation, our synthesized data come from fully trained model including all five domains in this setting.", "Table 2 shows that our model still consistently outperforms in full data augmentation of multi-domain dialogue state tracking.", "Specifically, our NeuralWOZ performs 2.8% point better on the joint goal accuracy of TRADE than ATDM.", "Our augmentation improves the performance by a 1.6% point while ATDM degrades.", "We also compare NeuralWOZ with VHDA, a previous model-based data augmentation method for dialogue state tracking (Yoo et al., 2020).", "Since the VHDA only considers single-domain simulation, we use single-domain dialogue in hotel Synthetic Restaurant Hotel no syn 64.1 / 93.1 52.3 / 91.9 VHDA 64.9 / 93.4 52.7 / 92.0 NeuralWOZ 65.8 / 93.6 53.5 / 92.1 Table 3: Full data augmentation on single-domain DST.", "and restaurant domains for the evaluation.", "Table 3 shows that our method still performs better than the VHDA in this setting.", "NeuralWOZ has more than twice better joint goal accuracy gain than that of VHDA.", "Table 4 shows the intrinsic evaluation results from two components (Collector and Labeler) of the NeuralWOZ on the validation set of MultiWOZ 2.1.", "We evaluate each component using perplexity for Collector and joint goal accuracy for Labeler, respectively.", "Note that the joint goal accuracy is achieved by using state candidate set, prepopulated as the multiple-choice options from the ground truth, B 1: T , as the training time of Labeler.", "It can be seen as using meta information since its purpose is accurate annotation but not the dialogue state tracking itself.", "We also report the results by excluding target domain from full dataset to simulate zero-shot environment.", "Surprisingly, synthesized data from ours performs effectively even though the annotation by Labeler is not perfect.", "We conduct further analysis, the responsibility of each model, in the following section.", "Figure 3 shows the slot accuracy for each slot type in the hotel domain, which is the weakest domain from ours.", "Different from other four domains, only the hotel domain has two boolean type slots, park-ing and internet, which can have only yes or no as their value.", "Since they have abstract property for the tracking, Labeler's labeling performance tends to be limited to this domain.", "However, it is noticeable that our accuracy of booking related slots (book stay, book people, book day) are much higher than the ATDM's.", "Moreover, the model using synthetic data from the ATDM totally fails to track the book stay slot.", "In the synthesizing procedures of Campagna et al. 
(2020), they create the data with a simple substitution of a domain noun phrase when the two domains have similar slots.", "For example, find me a restaurant in the city center can be replaced with find me a hotel in the city center since the restaurant and hotel domains share the area slot.", "We presume this is why they outperform on slots like pricerange and area.", "We further investigate how our method is complementary with human-annotated data.", "Figure 4 illustrates that our NeuralWOZ shows a consistent gain in the few-shot domain transfer setting.", "While the performance with ATDM saturates as the few-shot ratio increases, the performance with our NeuralWOZ improves continuously.", "We get about a 5.8% point improvement over the case that does not use synthetic data when using 10% of the human-annotated data for the target domain.", "It implies our method could be used more effectively with the (Figure 4: Few-shot learning result in MultiWOZ 2.1.)", "We investigate which of Collector and Labeler is more responsible for the quality of the synthesized dialogues.", "Table 5 shows ablation results where each model of NeuralWOZ is trained on data including or withholding the hotel domain.", "Except for the training data for each model, the pipelined models are trained and dialogues are synthesized in the same way.", "Then, we train a TRADE model using the synthesized data and evaluate it on the hotel domain as in the zero-shot setting.", "The performance gain from the Collector trained including the target domain is 4.3% points, whereas the gain from the Labeler is only 0.8% points.", "It implies the generation quality from Collector is more responsible for the performance of the zero-shot learner than the accurate annotation of Labeler.", "Figure 5 is a qualitative example generated by NeuralWOZ.", "It shows that NeuralWOZ can generate an unseen movie domain, which has a different schema from traveling, the meta domain of the MultiWOZ dataset, even if it is trained on only the (Figure 5: Unseen domain dialogue generation from NeuralWOZ.)", "dataset.", "It is harder to generalize when the schema structure of the target domain is different from the source domain.", "Other examples can be found in Appendix C. We would like to extend NeuralWOZ to more challenging expansion scenarios like these in future work.", "To show that our framework can be used for other dialogue tasks, we test our data augmentation method on the end-to-end task in MultiWOZ 2.1.", "We describe the result in Appendix D with discussion.", "In the full data setting, our method achieves 17.46 BLEU, a 75.1 Inform rate, a 64.6 Success rate, and an 87.31 Combined score, showing a performance gain using the synthetic data.", "Appendix D also includes the comparison and discussion on SimulatedChat (Mohapatra et al., 2020).", "We propose NeuralWOZ, a novel dialogue collection framework, and we show our method achieves state-of-the-art performance on the zero-shot domain transfer task.", "We find the dialogue corpus from NeuralWOZ is synergetic with human-annotated data.", "Finally, further analysis shows that NeuralWOZ can be applied to scaling dialogue systems.", "We believe NeuralWOZ will spark further research into dialogue system environments where expansion target domains are distant from the source domains.", "We thank Sohee Yang, Gyuwan Kim, Jung-Woo Ha, and other members of NAVER AI for their valuable comments.", "We also thank participants who helped our preliminary experiments for building data collection protocol." ]
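Illustrative note: the Labeler objective described in the sentences above (a softmax over the answer options per question, with a constant weight applied to the log-likelihood of non-None answers to counter the class imbalance) can be made concrete with a short sketch. This is a minimal PyTorch illustration and not the authors' released code; the function name, tensor shapes, and the default weight of 5 (the constant the text reports setting empirically) are assumptions for demonstration.

```python
import torch
import torch.nn.functional as F

def labeler_loss(option_logits, gold_index, gold_is_none, alpha=5.0):
    """Weighted multiple-choice loss for the Labeler.

    option_logits: (num_questions, num_options) logit scores s_{o_i}, one row per question.
    gold_index:    (num_questions,) index of the correct option for each question.
    gold_is_none:  (num_questions,) bool, True when the correct option is "None".
    alpha:         constant multiplied into the log-likelihood of non-"None" answers.
    """
    log_probs = F.log_softmax(option_logits, dim=-1)               # softmax over the option set O
    gold_logp = log_probs.gather(1, gold_index.unsqueeze(1)).squeeze(1)
    weights = torch.where(gold_is_none,
                          torch.ones_like(gold_logp),              # weight 1 for "None" answers
                          torch.full_like(gold_logp, alpha))       # weight alpha otherwise
    return -(weights * gold_logp).mean()

# toy usage: 3 questions, 4 answer options each
loss = labeler_loss(torch.randn(3, 4),
                    torch.tensor([0, 2, 1]),
                    torch.tensor([True, False, False]))
```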
[ "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "method", "other", "other" ]
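Illustrative note: the record above describes how the Collector input is assembled — the goal instruction G, a separator, and the API call results A flattened into <domain> and <slot> segments. The sketch below shows one possible rendering of that flattening step, assuming dict-shaped API result records and the special-token strings exactly as written in the text; it is not taken from the authors' implementation.

```python
def flatten_collector_input(goal_instruction, api_results):
    """Build the Collector input string: <s> G </s> <domain> d <slot> S V ...

    goal_instruction: natural-language goal text G.
    api_results: list of dicts such as
        {"domain": "restaurant", "slots": [("name", "graffiti"), ("food", "british")]}
    """
    parts = ["<s>", goal_instruction, "</s>"]
    for result in api_results:                    # concatenate a_1 ... a_|A|
        parts.append("<domain>")
        parts.append(result["domain"])
        for slot, value in result["slots"]:       # <slot> S V pairs for this API result
            parts.extend(["<slot>", slot, str(value)])
    return " ".join(parts)

text = flatten_collector_input(
    "You are looking for an expensive british restaurant.",
    [{"domain": "restaurant", "slots": [("name", "graffiti"), ("food", "british")]}],
)
# The string would then be tokenized (with the special tokens added to the
# vocabulary) and fed to the BART-based Collector, which generates the dialogue
# with nucleus sampling as described in the text.
```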
[ "Machine reading comprehension is a challenging task especially for querying documents with deep and interconnected contexts.", "Transformer-based methods have shown advanced performances on this task; however, most of them still treat documents as a flat sequence of tokens.", "This work proposes a new Transformer-based method that reads a document as tree slices.", "It contains two modules for identifying more relevant text passage and the best answer span respectively, which are not only jointly trained but also jointly consulted at inference time.", "Our evaluation results show that our proposed method outperforms several competitive baseline approaches on two datasets from varied domains.", "Machine Reading Comprehension (MRC) is the task of reading a given text and answering questions about it (Liu et al., 2019).", "Some MRC tasks such as SQuAD (Rajpurkar et al., 2016, 2018),and ShARC (Saeidi et al., 2018) provide a short text snippets as the context documents; while others such as TriviaQA (Joshi et al., 2017), Natural Questions (Kwiatkowski et al., 2019) and Doc2Dial (Feng et al., 2020) use full articles as documents.", "Most top performing models on MRC tasks use different variants of Transformers (Vaswani et al., 2017).", "Transformer-based models typically only consider a certain number of tokens, utilize a sliding window approach (Richardson et al., 2013) or segment the document into passages (Hu et al., 2019; Wang et al., 2019) due to the constraint on the size of input sequence.", "More recent works explore how to scale up input length (Yang et al., 2019; Beltagy et al., 2020; Kitaev et al., 2020; Wang et al., 2020; Ainslie et al., 2020) but still mainly focus on flat sequences.", "In addition to scaling up input length, ETC(Ainslie et al., 2020) also propose to deal with encoding structured inputs How to appeal a TVB ticket You can appeal online if you have been convicted in a NYSTVB the TVB traffic ticket number full name, date of birth, and gender ...", "A series of recent work explores incorporating structured knowledge embedded in text into MRC (Shen et al., 2020; Dhingra et al., 2020).", "However, such kind of linking information for creating triples is not necessarily prominent in documents other than Wikipedia.", "Some works segment the document content based on its semantic structures and rank them based on their relevance to the query (Yan et al., 2019; Lee et al., 2018; Wang et al., 2018; Zheng et al., 2020; Liu et al., 2020).", "Another thread of works, on hierarchical document encoding (Li et al., 2015; Yang et al., 2016; Zhang et al., 2019; Guo et al., 2019), first obtain sentence level representations then encode document based on the sentence vectors.", "Those works do not directly apply on fine-grained answer extraction across sentences.", "In many online documents, certain important information unfolds through the semantic relations of hierarchical structures such as parent-child and siblings between different parts of the document.", "Figure 1 illustrates the difference when using a document with and without the structure information for a MRC task.", "For query U1 , it is crucial to keep in mind we are in the context of How to appeal a TVB ticket and Online while reading the passage of You will need to find the answer to the user query.", "However, conventional Transformers fail to capture such contextual information when the text is too long to fit in the maximum sequence length allowed.", "In this work, we explore the utilization of document structure 
for the focused task of fine-grained Machine Reading Comprehension on document.", "We propose a Transformer-based method that reads a document as tree slices; it jointly learns the relevance of paragraphs and spans, and then performs a cascaded inference to find the best answer span.", "Our work is intuitively inspired by how people read through documents (Choi et al., 2017) based on structural cues such as titles and subtitles, and then focus on the relevant parts to search for an answer.", "We utilize the structural information naturally available in online documents for identifying tree slices.", "Each slice corresponds to nodes along a path from a root node to a lower level child node as illustrated by the right part of Figure 1.", "Thus, we are able to capture the essential structural information for the inference that could be outside of a conventional sliding window or text segment.", "Compared to approaches such as Longformer (Belt-agy et al., 2020) or ETC (Ainslie et al., 2020), our approach can be directly applied to many existing pretrained models, and has a small GPU mem-ory footprint.", "RikiNet (Liu et al., 2020) employs a dynamic paragraph dual-attention reader and a multi-level cascaded answer predictor, while our tree slices consider hierarchical structures above paragraphs, and our cascaded inference is in beam search style rather than greedy decoding style in RikiNet.", "We evaluate on two datasets with structured documents: one obtained from Natural Questions (Kwiatkowski et al., 2019), which is based on Wikipedia articles, and one from Doc2Dial (Feng et al., 2020), which is based on web pages of several domains.", "Our proposed method is compared with several baselines to see performance gain on both datasets.", "For example, our method achieves 4% gain of F1 on Doc2Dial, which shows its superiority on small-scaled dataset across multiple domains.", "(1) We propose a Transformer-based method that reads a document as a tree.", "It simultaneously identifies the relevance of paragraphs and finds the answer span via jointly trained models with cascaded inference.", "(2) Our method can utilize common structures as seen in many web documents.", "It allows Transformer models to read in more focused content but with deep context; thus it can be used to handle long documents in an efficient way.", "(3) Our proposed method outperforms several competitive baseline methods on two kinds of MRC tasks with documents from varied domains.", "We adopt a Transformer-based document-tree-slice encoder with joint learning and cascaded inference.", "Our approach is influenced by the pattern of human behavior during reading (Choi et al., 2017), which is to focus on a smaller portion at a time and favor the more relevant parts while looking for answer.", "This approach can also overcome the constraint on fixed-length input allowed by the common Transformer architecture (Vaswani et al., 2017).", "More importantly, this enables us to always include important structural context information during encoding.", "To obtain the tree representation of a web page, we consider the different levels of HTML title tags as the main indicators of the hierarchical structures such as parent-child and siblings in Figure 1.", "More details are provided in Section", "3. 
Formally, we define an example in the dataset as ( Q, D, s, e ) where Q is a question, D is a document, s and e denote the inclusive indices pointing to the start and end of the target answer span.", "Suppose one does not consider the structure information, D is treated as a sequence and sent to Transformer encoder.", "For long documents, the sliding window approach is widely used to truncate D into m overlapping fragments D 1 , ...D m , and ( Q, D, s, e ) is converted to m training instances ( C i , s i , e i ) where C i = ([ CLS ] , Q 1 , ..., Q | Q | , [ SEP ] , D i, 1 , ..., D i, | D i | , [ SEP ]) , s i and e i are mapped indices in C i .", "If D i does not contain the target answer, s i and e i are set to the index of the [ CLS ] token.", "In our proposed approach to encode a document, we consider the structured information along with its content.", "Given a document D , let k be Figure 2: Joint Model with Cascaded Inference.", "the number of leaf nodes in its tree structure.", "We first convert ( Q, D, s, e ) into k examples ( Q, A i , P i , s i , e i ) , where P i is a leaf node, s i and e i are mapped indices in P i , and A i denotes P i 's ancestor chain in the document tree of D .", "Each ( Q, A i , P i ) is then encoded with Transformers as a sequence C i = ([ CLS ] , Q 1 , ..., Q | Q | , [ SEP ] , A i, 1 , ..., A i, | A i | , [ SEP ] , P i, 1 , ..., P i, | P i | , [ SEP ]) .", "An example A i in Figure 1 would be the list of { How to appeal a TVB ticket conviction' , Online' , You will need' }.", "Intuitively, the tree slice approach ensures that the most relevant structural information, the ancestor chain, is always taken into account and attended to with Transformer encoder, while this is unlikely to be guaranteed by the sliding window truncation.", "With tree slicing approach, from each document we have many paragraphs to select the answer span from, as compared to the case of sliding windows.", "In order to teach the model to favor the candidates from the more relevant parts of the document, we train a joint model to simultaneously learn to identify the relevance of paragraphs and find the answer span.", "Then we perform a cascaded inference to first find the most relevant paragraphs and then find the best answer span from them, based on the scores from the joint model, as Figure 2 shows.", "Joint model The encoded representation of C can be used to perform two tasks, each being handled by a separate module: 1) the pooler layer and the matching layer (both linear layers) predict how likely a paragraph P contains the answer; 2) the span selection layer (another linear layer) identifies the answer span from P .", "Each training instance is converted to ( C, s, e, g ) where g { 0 , 1 } denotes whether P contains the answer.", "We define the loss function to be Loss ( g, s, e, C ; ) = LCE ( f hit ( g, C ; )) + ( LCE ( f start ( s, C ; )) + LCE ( f end ( e, C ; ))) where LCE is the Cross Entropy loss function, denotes the model parameters, and each f is the score obtained by the corresponding linear layer on top of the last layer representation of Transformer encoder: f hit by the pooler layer and the matching layer, and f start and f end by the span selection layer.", "Cascaded inference After the two modules of the model are jointly trained, we conduct a cascaded inference in a beam search style.", "First, from all the instances corresponding to tree slices of a single document, we select the top n instances ranked by f hit ( g = 1 , C ; ) .", "This is important for filtering 
out high scored spans from irrelevant tree slices.", "Then, from these top instances, each candidate document span is assigned a score attributed from both modules of the model: Score ( C, s, e ) = f hit ( g = 1 , C ; ) + ( Score start ( s, C ; ) + Score end ( e, C ; )) where we adopted the trick from (Alberti et al., 2019) to define Score start ( s, C ; ) = f start ( s, C ; ) f start ( Idx CLS , C ; ) , and Score end ( e, C ; ) as f end ( e, C ; ) f end ( Idx CLS , C ; ) .", "Finally, we choose the document span with the highest Score ( C, s, e ) as the answer.", "Given a document with tree slices, we would create more instances than the sliding window approach.", "However, with the joint training and cascaded inference, our model reaches better accuracy in less training time, as will be shown in Section", "4. 3 Data Our focused task is utilizing document structure in contextual representation for fine-grained MRC.", "Since there is very few prior MRC datasets that provides document structure information, we iden-tified two public datasets where HTML markup tags are available in the document data together with QA pairs, and extract tree structure out of the HTML documents for MRC.", "Data script could be found at http://html2struct.github.io.", "Extract Tree Structure To obtain the tree representation of documents from the two datasets, we first parse HTML files to get markup tags of the textual content elements, which corresponds to the titles, lists, tables and paragraphs.", "We consider the different levels of title tags as the main indicators of the hierarchical structures such as parent-child and siblings.", "Thus, the stem nodes are inherently section or subsection titles of the article and leaf nodes are typically paragraphs, list content or table content.", "We assign the article title as the tree root.", "Please refer to Appendix A for more details about the data statistics for the experiment.", "NQStruct Natural Question (Kwiatkowski et al., 2019) provides QA pairs that are grounded in Wikipedia articles.", "The original task provides answers in two formats: long answer , typically a paragraph or table; short answer , typically one or more entities.", "In our task, we focus on identifying the short answer given the whole document as the input, and do not use the long answers data.", "We observe the bias on answers appearing in first paragraph, which is significant enough to serve as a baseline (Kwiatkowski et al., 2019).", "Thus, we follow Geva and Berant (2018) to alleviate such bias by only considering the questions where the short answer does not come from the first paragraph.", "As a result, we derive a subset of 48K examples from about 100K examples with short answers from training and dev sets.", "D2DStruct Doc2Dial (Feng et al., 2020) provides document-grounded dialogues with annotations of dialogue scenes, which allow us to identify question-answer pairs that are most related to our target task.", "Specifically, we combine each turn of the agent responding to a user query, together with the previous dialogue context, as a question.", "The public dataset contains over 4.1K document-grounded dialogues based on about 450 documents from different domains, and we derive 9.3K QA pairs out of it.", "We compare our proposed method ( TreeJC in short) with several baseline methods.", "Next we describe the baselines, the experiment settings, and present the evaluation results.", "Sliding Window (SW) is a popular question answering baseline that trains a span selection model with 
Transformer encoding document trunks as described in Section", "2. Longformer is a Transformer model that handles long documents (Beltagy et al., 2020).", "We experiment the sliding window approach above with Longformer-base pretrained model with max sequence length of 4096 and a stride of 3072.", "IR+SW is a pipeline approach that first identifies small number of k candidate paragraphs ( k = 10 in the experiments here) via an information retrieval mechanism BM25 (Robertson et al., 1995), and then uses the SW approach.", "We consider it as a solution with reduced time complexity from the traditional SW approach for us to compare with.", "LeafJC For ablation study, we experiment with a variant of TreeJC approach that excludes ancestors during encoding.", "The other implementation and experimental details are similar to TreeJC.", "All models are implemented in PyTorch.", "Pretrained models are Roberta-base for SW, IR+SW, LeafJC and TreeJC, and Longformer-base for Longformer.", "Implementations of SW, Longformer and IR+SW are adapted from the SQuAD example code 1 in HuggingFace Transformers (Wolf et al., 1 https://github.com/huggingface/ transformers/blob/master/examples/legacy/question-answering/ .", "2019).", "For a fair comparison, input to SW, Longformer and IR+SW is flattened equivalent of the tree input to LeafJC and TreeJC and does not include the HTML tags of web pages.", "All experiments were done on a single V100 GPU.", "In order to encode long sequences, Longformer requires much larger GPU memory, only 2 instances could fit in one V100 GPU, whereas 27 instances could fit in one V100 GPU with Roberta.", "Please see Appendix B for more details about experiment setup.", "For the training of our approach TreeJC, positive instances are up-sampled to reach a balanced proportion of positive and negative training instances.", "To avoid the consequent bias towards longer documents, the loss from each example (document QA pair) is scaled down by the number of training instances from this example.", "and in Section 2 is set to be 0.5 and 1, respectively.", "For evaluation, we use exact match score and token-level F1 score (Rajpurkar et al., 2018).", "Table 1 and Table 2 present the evaluation results on the test sets of D2DStruct and NQStruct respectively along with the training time.", "All numbers are in the form of mean std , which is from three runs with different random seeds.", "We observe consistent performance gains by TreeJC over almost all baselines.", "TreeJC shows a significant improvement over SW, which indicates the effectiveness of encoding the structure information with our joint model with cascaded inference.", "LeafJC performs better than SW but worse than TreeJC, which confirms the importance of including ancestor nodes during encoding.", "Longformer 2 serves as a competitive baseline and it achieves half a point higher F1 for D2DStruct dataset, however, at the cost of much longer training time.", "IR+SW method, on the other hand, shows high efficiency 2 5 epochs were finished on NQStruct, as in experiments in (Beltagy et al., 2020).", "but suffers lower effectiveness, attributing to the fact that the IR method only achieves around 73% recall.", "In order to further examine how our approach performs on documents with different sizes, we break down the results on NQStruct dataset and compare the performances in Table", "3. 
The results show that our approach has a clear gain on all document lengths over SW, especially on very long documents.", "We introduce a new Transformer-based method with joint learning and cascaded inference inspired by the tree structures of documents for machine reading comprehension.", "It outperforms several competitive baselines on two datasets from multiple domains.", "In particular, our study demonstrates that the proposed model is effective to encode longer documents with deep contexts for MRC tasks." ]
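Illustrative note: the joint objective in the record above combines a slice-level "hit" classification loss with weighted start/end span losses. Below is a minimal PyTorch sketch of that combination; the shapes, names, and default weight are assumptions rather than the authors' code (the text reports setting the two loss/score weights to 0.5 and 1).

```python
import torch
import torch.nn.functional as F

def treejc_joint_loss(hit_logits, start_logits, end_logits,
                      hit_label, start_idx, end_idx, lam=0.5):
    """Joint training loss: L = CE(hit) + lam * (CE(start) + CE(end)).

    hit_logits:   (batch, 2)        does this tree slice contain the answer?
    start_logits: (batch, seq_len)  per-token start scores f_start
    end_logits:   (batch, seq_len)  per-token end scores f_end
    hit_label, start_idx, end_idx: (batch,) gold labels; slices that do not
    contain the answer point start/end to the [CLS] position (index 0).
    """
    hit_loss = F.cross_entropy(hit_logits, hit_label)
    span_loss = F.cross_entropy(start_logits, start_idx) + F.cross_entropy(end_logits, end_idx)
    return hit_loss + lam * span_loss
```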
[ "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "method", "method", "abstain", "abstain", "method", "method", "result", "objective", "result", "objective", "abstain", "result", "abstain", "objective", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "other", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "objective", "abstain", "objective" ]
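Illustrative note: the cascaded inference described in the record above first keeps the top-n tree slices by hit score and then ranks candidate spans by the hit score plus the [CLS]-normalised start/end scores. The sketch below is one possible rendering of that procedure; the brute-force span enumeration, the max_span_len cap, and the parameter names are illustrative assumptions, not the authors' implementation.

```python
import torch

def cascaded_inference(hit_scores, start_logits, end_logits,
                       top_n=5, mu=1.0, max_span_len=30):
    """Return (slice_index, start, end) of the best-scoring answer span.

    hit_scores:   (num_slices,)          f_hit(g=1, C) for each tree slice
    start_logits: (num_slices, seq_len)  f_start scores; position 0 is [CLS]
    end_logits:   (num_slices, seq_len)  f_end scores;   position 0 is [CLS]
    """
    keep = torch.topk(hit_scores, min(top_n, hit_scores.size(0))).indices
    best, best_score = None, float("-inf")
    for i in keep.tolist():
        s_scores = start_logits[i] - start_logits[i, 0]   # Score_start: subtract the [CLS] score
        e_scores = end_logits[i] - end_logits[i, 0]       # Score_end: subtract the [CLS] score
        seq_len = s_scores.size(0)
        for s in range(1, seq_len):
            for e in range(s, min(s + max_span_len, seq_len)):
                score = float(hit_scores[i]) + mu * float(s_scores[s] + e_scores[e])
                if score > best_score:
                    best_score, best = score, (i, s, e)
    return best
```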
[ "Fully supervised neural approaches have achieved significant progress in the task of Chinese word segmentation (CWS).", "Nevertheless, the performance of supervised models tends to drop dramatically when they are applied to out-of-domain data.", "Performance degradation is caused by the distribution gap across domains and the out of vocabulary (OOV) problem.", "In order to simultaneously alleviate these two issues, this paper proposes to couple distant annotation and adversarial training for cross-domain CWS.", "For distant annotation, we rethink the essence of Chinese words and design an automatic distant annotation mechanism that does not need any supervision or pre-defined dictionaries from the target domain.", "The approach could effectively explore domain-specific words and distantly annotate the raw texts for the target domain.", "For adversarial training, we develop a sentence-level training procedure to perform noise reduction and maximum utilization of the source domain information.", "Experiments on multiple real-world datasets across various domains show the superiority and robustness of our model, significantly outperforming previous state-of-the-art cross-domain CWS methods.", "Chinese is an ideographic language and lacks word delimiters between words in written sentences.", "Therefore, Chinese word segmentation (CWS) is often regarded as a prerequisite to downstream tasks in Chinese natural language processing.", "This task is conventionally formalized as a character-based sequence tagging problem (Peng et al., 2004), where each character is assigned a specific label to denote the position of the character in a word.", "With the development of deep learning techniques, recent years have also seen increasing interest in applying neural network models onto CWS (Cai Corresponding author Figure 1: Different word distributions for the newswire domain and the medical domain. 
and Zhao, 2016; Liu et al., 2016; Cai et al., 2017; Ma et al., 2018).", "These approaches have achieved significant progress on in-domain CWS tasks, but they still suffer from the cross-domain issue when they come to processing of out-of-domain data.", "Cross-domain CWS is exposed to two major challenges: 1) Gap of domain distributions .", "This is a common issue existing in all domain adaptation tasks.", "Source domain data and target domain data generally have different distributions.", "As a result, models built on source domain data tend to degrade performance when they are applied to target domain data.", "Generally, we need some labeled target domain data to adapt source domain models, but it is expensive and time consuming to manually craft such data.", "2) Out of vocabulary (OOV) problem , which means there exist some words in the testing data that never occur in the training data.", "Source domain models have difficulties in recognizing OOV words since source domain data contains no information on the OOVs.", "Figure 1 presents examples to illustrate the difference between the word distributions of the newswire domain and the medical domain.", "Segmenters built on the newswire domain have very limited information to segment domain-specific words like (Lysozyme).", "Previous approaches to cross-domain CWS mainly fall into two groups.", "The first group aims to attack the OOV issue by utilizing predefined dictionaries from the target domain to facilitate cross-domain CWS (Liu et al., 2014; Zhao et al., 2018; Zhang et al., 2018), which are apt to suffer from scalability since not all domains possess predefined dictionaries.", "In other words, these methods are directly restricted by external resources that are available in a target domain.", "Studies in the second group (Ye et al., 2019) attend to learn target domain distributions like word embeddings from unlabeled target domain data.", "In this approach, source domain data is not fully utilized since the information from source domain data is transferred solely through the segmenter built on the data.", "In this paper, we propose to attack the aforementioned challenges simultaneously by coupling the techniques of distant annotation and adversarial training .", "The goal of distant annotation is to automatically construct labeled target domain data with no requirement for human-curated domain-specific dictionaries.", "To this end, we rethink the defini-tion and essence of Chinese words and develop a word miner to obtain domain-specific words from unlabeled target domain data.", "Moreover, a segmenter is trained on the source domain data to recognize the common words in unlabeled target data.", "This way, sentences from the target domain are assigned automatic annotations that can be used as target domain training data.", "Although distant annotation could provide satisfactory labeled target domain data, there still exist annotation errors that affect the final performance.", "To reduce the effect of noisy data in automatic annotations in target domain data and make better use of source domain data, we propose to apply adversarial training jointly on the source domain dataset and the distantly constructed target domain dataset.", "And the adversarial training module can capture deeper domain-specific and domain-agnostic features.", "To show the effectiveness and robustness of our approach, we conduct extensive experiments on five real-world datasets across various domains.", "Experimental results show that our approach achieves 
state-of-the-art results on all datasets, significantly outperforming representative previous works.", "Further, we design sufficient subsidiary experiments to prove the alleviation of the aforementioned problems in cross-domain CWS.", "tagging problem.", "Thus, traditional machine learning models such as Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs) are widely employed for CWS in the early stage (Wong and Chan, 1996; Gao et al., 2005; Zhao et al., 2010).", "With the development of deep learning methods, research focus has been shifting towards deep neural networks that require little feature engineering.", "Chen et al. (2015) are the first that use LSTM (Hochreiter and Schmidhuber, 1997) to resolve long dependencies in word segmentation problems.", "Since then, the majority of efforts is building end-to-end sequence tagging architectures, which significantly outperform the traditional approaches on CWS task (Wang and Xu, 2017; Zhou et al., 2017; Yang et al., 2017; Cai et al., 2017; Chen et al., 2017; Huang et al., 2019b; Gan and Zhang, 2019; Yang et al., 2019).", "Cross-domain CWS As a more challenging task, cross-domain CWS has attracted increasing attention.", "Liu and Zhang (2012) propose an unsupervised model, in which they use a character clustering method and the self-training algorithm to jointly model CWS and POS-tagging.", "Liu et al. (2014) apply partial CRF for cross-domain CWS via obtaining a partial annotation dataset from freely available data.", "Similarly, Zhao et al. (2018) build partially labeled data by combining unlabeled data and lexicons.", "Zhang et al. (2018) propose to incorporate the predefined domain dictionary into the training process via predefined handcrafted rules.", "Ye et al. (2019) propose a semi-supervised approach that leverages word embeddings trained on the segmented text in the target domain.", "Adversarial Learning Adversarial learning is derived from the Generative Adversarial Nets (GAN) (Goodfellow et al., 2014), which has achieved huge success in the computer vision field.", "Recently, many works have tried to apply adversarial learning to NLP tasks.", "(Jia and Liang, 2017; Li et al., 2018; Farag et al., 2018) focus on learning or creating adversarial rules or examples for improving the robustness of the NLP systems.", "For cross-domain or cross-lingual sequence tagging, the adversarial discriminator is widely used to extract domain or language invariant features (Kim et al., 2017; Huang et al., 2019a; Zhou et al., 2019).", "Figure 2 shows the framework of our approach to cross-domain CWS, which is mainly composed", "of two components: 1) Distant Annotation (DA) , and 2) Adversarial Training (AT) .", "In the following, we will describe details of the framework (DAAT) from the left to right in Figure 2.", "In this paper, bold-face letters (e.g. 
W ) are used to denote vectors, matrices and tensors.", "We use numerical subscripts to indicate the indices of a sequence or vector.", "We use the subscript of src to indicate the source domain and tgt to denote the target domain.", "As illustrated in Figure 2, given a labeled source domain dataset and an unlabeled target domain dataset, distant annotation (DA) aims to automatically generate word segmentation results for sentences in the target domain.", "DA has two main modules, including a base segmenter and a Domain-specific Words Miner .", "Specifically, the base segmenter is a GCNN-CRF (Wang and Xu, 2017) model trained solely on the labeled source domain data and is used to recognize words that are common among the source and target domains.", "Domain-specific Words Miner is designed to explore the target domain-specific words.", "Base Segmenter In the CWS task, given a sentence s = { c 1 , c 2 , ..., c n } , following the BMES tagging scheme, each character c i is assigned one of the labels in { B, M, E, S } , indicating whether the character is in the beginning, middle, end of a word, or the character is merely a single-character word.", "For a sentence s , we first use an embedding layer to obtain the embedding representation e i for each character c i .", "Then, the sentence s can be represented as e = { e 1 , e 2 , ..., e n } R n d , where d denotes the embedding dimension.", "e will be fed into the GCNN model (Dauphin et al., 2017; Gehring et al., 2017), which computes the output as: H s = ( e W + b ) (cid:12) ( e V + c ) , (1) here, W R k d l , b R l , V R k d l , c R l .", "d and l are the input and output dimensions respectively, and k is the window size of the convolution operator.", "is the sigmoid function and (cid:12) represents element-wise product.", "We adopt a stacking convolution architecture to capture long distance information, the output of the previous layers will be treated as input of the next layer.", "The final representation of sentence s is H s = { h 1 , h 2 , ..., h n } .", "Correlations among labels are crucial factors in sequence tagging.", "Particularly, for an input sequence s src = { c 1 , c 2 , ..., c n } (take source domain data as example), the corresponding label sequence is L = { y 1 , y 2 , ..., y n } .", "The goal of CRF is to compute the conditional probability distribution: P ( L | s src )= exp ( n (cid:80) i =1 ( S ( y i )+ T ( y i 1 , y i ))) (cid:80) L (cid:48) C exp ( n (cid:80) i =1 ( S ( y (cid:48) i )+ T ( y (cid:48) i 1 , y (cid:48) i ))) , (2) where T denotes the transition function to calculate the transition scores from y i 1 to y i .", "C contains all the possible label sequences on sequence s and L (cid:48) is a random label sequence in C .", "And S represents the score function to compute the emission score from the hidden feature vector h i to the corresponding label y i , which is defined as: S ( y i ) = W y i h i + b y i , (3) W y i and b y i are learned parameters specific to the label y i .", "To decode the highest scored label sequence, a classic Viterbi (Viterbi, 1967) algorithm is utilized as the decoder.", "The loss function of the sequence tagger is defined as the sentence-level negative log-likelihood: L src = (cid:88) log P ( L | s src ) .", "Domain-specific Words Miner As mentioned in section 1, previous works usually use existing domain dictionaries to solve the domain-specific noun entities segmentation problem in cross-domain CWS.", "But this strategy does not consider that it is properly difficult to acquire a 
dictionary with high quality for a brand new domain.", "In contrast, we develop a simple and efficient strategy to perform domain-specific words mining without any predefined dictionaries.", "Given large raw text on target domain and a base segmenter, we can obtain a set of segmented texts = { T 1 , T 2 , ..., TN } , where stop-words are removed.", "Then let = { t 1 , t 2 , ..., t m } denote all the n-gram sequences extracted from .", "For each sequence t i , we need to calculate the possibility that it is a valid word.", "In this procedure, four factors are mainly considered.", "1) Mutual Information (MI).", "MI (Kraskov et al., 2004) is widely used to estimate the correlation of two random variables.", "Here, we use mutual information between different sub-strings to measure the internal tightness for a text segment, as shown in Figure", "3(a).", "Further, in order to exclude extreme cases, it is necessary to enumerate all the sub-string candidates.", "The final MI score for one sequence t i consists of n characters t i = { c 1 ...c n } is defined as: MIS( t i )= min j [1: n ] { p ( t i ) p ( c 1 ...c j ) p ( c j +1 ...c n ) } , (5) where p ( ) denotes the probability given the whole corpus .", "2) Entropy Score (ES).", "Entropy is a crucial concept aiming at measuring the uncertainty of random variables in information theory (Jaynes, 1957).", "(b) Entropy Score to measure the external flexibility.", "Thus, we can use ES to measure the uncertainty of candidate text fragment, since higher uncertainty means a richer neighboring context.", "Let N l ( t i ) = { l 1 , ..., l k } and N r ( t i ) = { r 1 , ..., r k (cid:48) } be the set of left and right adjacent characters for t i .", "The left entropy score ES l and right entropy ES r of t i can be formulated as ES l ( t i ) = (cid:80) kj p ( l j )log p ( l j ) and ES r ( t i ) = (cid:80) k (cid:48) j p ( r j )log p ( r j ) respectively.", "We choose min(ES l ( t i ) , ES r ( t i )) as the final score for t i .", "Hence, ES( t i ) could explicitly represent the external flexibility for a text segment (as shown in Figure", "3(b)), and further serve as an important indicator to judge whether the segment is an independent word.", "3) tf-idf .", "tf-idf is a widely used numerical statistic that can reflect how important a word is to a document in a collection or corpus.", "As illustrated in Figure 1, most of the domain-specific words are noun entities, which share a large weighting factor in general.", "In this work, we define a word probability score p val ( t i ) to indicate how likely t i can be defined as a valid word.", "where denotes the sigmoid function and N denotes normalization operation with the max-min method.", "4) Word frequency .", "If t i is a valid word, it should appear repeatedly in .", "Finally, by setting an appropriate threshold for p val ( t i ) and word frequence, the Domain-Specific Words Mine r could effectively explore domain-specific words, then construct the domain-specific word collection C for the target domain.", "In this work, we only consider words t i with p val ( t i ) 0 .", "95 and frequency larger than 10 .", "The left part of Figure 2 illustrates the data construction process of DA .", "First, we utilize the Domain-specific Words Miner to build the collection C for the target domain.", "Take sentence (Scientific research on lysozyme) as an example, we use the forward maximizing match algorithm based on C , which shows that (lysozyme) is a valid word.", "Hence, the labels of characters , , are B , M , E .", 
"For the left part of the sentence, we adopt the baseline segmenter to perform the labelling process.", "will be assigned with { S , B .", "E , B , E } .", "To this end, we are able to automatically build annotated dataset on the target domain.", "The structure of the Adversarial Training module is illustrated as the right part of Figure 2.", "As mentioned in 3.1, we construct an annotated dataset for the target domain.", "Accordingly, the inputs of the network are two labeled datasets from source domain S and target domain T .", "There are three encoders to extract features with different emphases, and all the encoders are based on GCNN as introduced in section 3.1.", "For domain-specific features, we adopt two independent encoders E src and E tgt for source domain and target domain.", "For domain-agnostic features, we adopt a sharing encoder E shr and a discriminator G d , which will be both trained as adversarial players.", "For the two domain-specific encoders, the input sentence is s src = { c s 1 , c s 2 , ..., c s n } from source domain, or sentence s tgt = { c t 1 , c t 2 , ..., c tm } from the target domain.", "The sequence representation of s src and s tgt can be obtained by E src and E tgt .", "Thus, the domain independent representations of s src and s tgt are H s R n l and H t R m l , where n and m denote the sequence lengths of s src and s tgt respectively, l is the output dimension of GCNN encoder.", "For the sharing encoder, we hope that E shr is able to generate representations that could fool the sentence level discriminator to correctly predict the domain of each sentence, such that E shr finally extracts domain-agnostic features.", "Formally, given sentences s src and s tgt from source domain and target domain, E shr will produce sequence features H s and H t for s src and s tgt respectively.", "The discriminator G d of the network aims to distinguish the domain of each sentence.", "Specifically, we will feed the final representation H of every sentence s to a binary classifier G y where we adopt the text CNN network (Kim, 2014).", "G y will produce a probability that the input sentence s is from the source domain or target domain.", "Thus, the loss function of the discriminator is: L d = E s p S ( s ) [log G y ( E shr ( s )] E s p T ( s ) [log (1 G y ( E shr ( s ))] , (7) Features generated by the sharing encoder E shr should be able to fool the discriminator to correctly predict the domain of s .", "Thus, the loss function for the sharing encoder L c is a flipped version of L d : L c = E s p S ( s ) [log (1 G y ( E shr ( s )]) E s p T ( s ) [log G y ( E shr ( s )] , (8) Finally, we concatenate H and H as the final sequence representation of the input sentence.", "For s src from source domain, H ( s src ) = [ H s H s ] , while for s tgt from the target domain, H ( s tgt ) = [ H t H t ] .", "The final representation will be fed into the CRF tagger.", "So far, our model can be jointly trained in an end-to-end manner with the standard back-propagation algorithm.", "More details about the adversarial training process are described in Algorithm 1.", "When there is no annotated dataset on the target domain, we could remove L tgt during the adversarial training process and use the segmenter on source domain for evaluation.", "Algorithm 1 Adversarial training algorithm.", "Input: Manually annotated dataset D s for source domain S , and distantly annotated dataset D t for target domain T for i 1 to epochs do for j 1 to num of steps per epoch do Sample mini-batches X s D s , X t 
D t if j %2 = 1 then loss = L src + L tgt + L d Update w.r.t loss else loss = L src + L tgt + L c Update w.r.t loss end end end Dataset Sents Words Chars Domain SRC PKU Train 47.3K 1.1M 1.8M News Test 6.4K 0.2M 0.3M TGT DL Full 40.0K 2.0M 2.9M Novel Test 1.0K 32.0K 47.0K FR Full 148K 5.0M 7.1M Novel Test 1.0K 17.0K 25.0K ZX Full 59.0K 2.1M 3.0M Novel Test 1.0K 21K 31.0K DM Full 32.0K 0.7M 1.2M Medical Test 1.0K 17K 30K PT Full 17.0K 0.6M 0.9M Patent Test 1.0K 34.0K 57.0K Table 1: Statistics of datasets.", "In this section, we conduct extensive cross-domain CWS experiments on multiple real-world datasets with different domains, then comprehensively evaluate our method and other approaches.", "Datasets Six datasets across various domains are used in our work.", "The statistics of all datasets are shown in Table 1.", "In this paper, we use PKU dataset (Emerson, 2005) as the source domain data, which is a benchmark CWS dataset on the newswire domain.", "In addition, the other five datasets in other domains will be utilized as the target domain datasets.", "Among the five target domain datasets there are three Chinese fantasy novel datasets, including DL (DoLuoDaLu) , FR (FanRenXiuXianZhuan) and ZX (ZhuXian) (Qiu and Zhang, 2015).", "An obvious advantage for fantasy novel datasets is that there are a large number of proper words originated by the author for each fiction, which could explicitly reflect the alleviation of the OOV problem for an approach.", "Besides the fiction datasets, we also use DM (dermatology) and PT (patent) datasets (Ye et al., 2019), which are from dermatology domain and patent domain respectively.", "All the domains of the target datasets are very different from the source dataset (newswire).", "To perform a fair and comprehensive evaluation, the full/test settings of the datasets follow Ye et al. 
(2019).", "Hyper-Parameters Table 2 shows the hyper-parameters used in our method.", "All the models are implemented with Tensorflow (Abadi et al., 2016) and trained using mini-batched back-propagation.", "Adam optimizer (Kingma and Ba, 2015) is used for optimization.", "Tesla V100 GPUs with CUDA .", "Evaluation Metrics We use standard micro-averaged precision (P), recall (R) and F-measure as our evaluation metrics.", "We also compute OOV rates to reflect the degree of the OOV issue.", "We make comprehensive experiments with selective previous proposed methods, which are: Partial CRF (Liu et al., 2014) builds partially annotated data using raw text and lexicons via handcrafted rules, then trains the CWS model based on both labeled dataset (PKU) and partially annotated data using CRF.", "CWS-DICT (Zhang et al., 2018) trains the CWS model with a BiLSTM-CRF architecture, which incorporates lexicon into a neural network by designing handcrafted feature templates.", "For fair comparison, we use the same domain dictionaries produced by the Domain-specific Words Miner for Partial CRF and CWS-DICT methods.", "WEB-CWS (Ye et al., 2019) is a semi-supervised word-based approach using word embeddings trained with segmented text on target domain to improve cross-domain CWS.", "Besides, we implement strong baselines to perform a comprehensive evaluation, which are: GCNN (PKU) uses the PKU dataset only, and we adopt the GCNN-CRF sequence tagging architecture (Wang and Xu, 2017).", "GCNN (Target) uses the distantly annotated dataset built on the target domain only.", "GCNN (Mix) uses the mixture dataset with both the PKU dataset and the distantly annotated target domain dataset.", "DA is a combination of GCNN (PKU) and domain-specific words.", "Details are introduced in 3.1.", "AT denotes the setting that we adopt adversarial training when no distantly annotated dataset on the target domain is provided, but the raw text is available.", "(1) Our DAAT model significantly outperforms previously proposed methods on all datasets, yielding the state-of-the-art results.", "Particularly, DAAT improves the F1-score on the five datasets from 93.5 to 94.1, 90.2 to 93.1, 89.6 to 90.9, 82.8 to 85.0 and 85.9 to 89.6 respectively.", "The results demon-1 source code and dataset will be available at https:// github.com/Alibaba-NLP/DAAT-CWS Hyper-parameter Name Value Threshold for p val 0.95 Char emb size 200 GCNN output dim 200 Text CNN num of filters 200 Text CNN filter size [3,4,5] GCNN layers 5 Dropout Rate 0.3 Batch size 128 Learning rate 0.001 Epochs 30 Table 2: Hyper-parameters.", "strate that the unified framework is empirically effective, for the alleviation of the OOV problem and the full utilization of source domain information.", "(2) As mentioned in section 3, the AT model uses the same adversarial training network as the DAAT, yet without annotation on the target domain dataset.", "Results on the AT setting could explicitly reflect the necessity to construct the annotated target domain dataset.", "Specifically, without the constructed dataset, the AT method only yields 90.7, 86.8, 85.0, 81.0 and 85.1 F1-scores on five datasets respectively.", "But when use the annotated target domain dataset, we can get the DAAT with the best performance.", "(3) WEB-CWS was the state-of-the-art approach that utilizes word embeddings trained on the segmented target text.", "Yet it is worth noticing that our model that only combines the base segmenter trained on PKU and domain-specific words ( DA ) could outperform WEB-CWS, 
which indicates that the distant annotation method could exploit more and deeper semantic features from the raw text.", "For the CWS-DICT method, which requires an external dictionary, we use the word collection (built by the Domain-specific Words Miner ) to guarantee the fairness of the experiments.", "We can observe that our framework could yield significantly better results than CWS-DICT.", "Moreover, CWS-DICT needs existing dictionaries as external information, which is difficult for the model to transfer to brand new domains without specific dictionaries.", "In contrast, our framework utilizes the Domain-specific Words Miner to construct the word collection with high flexibility across domains.", "In this section, we focus on exploring the ability to tackle the OOV problem for the DA method, which could distantly construct an annotated dataset from the raw text on the target domain.", "As illustrated in Table 4, the cross-domain CWS task suffers from a surprisingly serious OOV problem.", "All OOV rates (source) are above 10%, which will definitely degrade model performance.", "Nevertheless, after constructing an annotated dataset on the target domain, the OOV rate (target) drops significantly.", "Specifically, the DA method yields 9.92%, 13.1%, 14.09% 20.51% and 14.94% absolute OOV rate drop on the five out-domain datasets.", "The statistical result reveals that the Domain-specific Words Miner could accurately explore specific domain words for any domains from raw texts.", "Therefore, the DA of our framework could efficaciously tackle the OOV problem.", "Moreover, the module does not need any specific domain dictionaries, which means it can be transferred to new domains without limitations.", "Obviously, the setting of the hyper-parameter p val will directly affect the scale and quality of the domain-specific word collection.", "To analyze how p val affects the model performance, we conduct experiments with different setting p val in { 0 .", "7 , 0 .", "8 , 0 .", "9 , 0 .", "95 , 0 .", "99 } , and the size of word collection and model performance on DL and DM datasets are shown in Figure 4.", "Constant with intuition, the collection size will decrease as the increase of p val because the filter criterion for words will get more strict, which is also a process of noise reduction.", "However, the F1-score curves are not incremental or descending.", "When p val < = 0 .", "95 , the F1-scores on two datasets will increase because the eliminated words of this stage are mostly wrong.", "While the F1-scores will maintain or decrease when p val > 0 .", "95 , because in this case, some correct words will be eliminated.", "We set p val = 0 .", "95 to guarantee the quality and quantity of the word collection simultaneously, so as to guarantee the model performance.", "And in this setting, the collection sizes are 0.7k words for DL, 1.7k for FR, 3.3k for ZX, 1.5k for DM and 2.2k for PT respectively.", "We develop an adversarial training procedure to reduce the noise in the annotated dataset produced by DA .", "In Table 3, we find that GCNN (Target) method trained on the annotated target dataset constructed by DA achieves impressive performance on all the five datasets, outperforming the WEB-CWS method.", "In addition, with the adversarial training module, the model further yields the remark-Dataset Previous Methods (F1-score) Ours (F1-score) Partial CRF CWS-DICT WEB-CWS AT GCNN (PKU) DA GCNN(Mix) GCNN (Target) DAAT DL 92.5 92.0 93.5 90.7 90.0 93.6 93.9 93.9 94.1 (+0.6) FR 90.2 89.1 89.6 86.8 86.0 
92.4 92.6 92.6 93.1 (+2.9) ZX 83.9 88.8 89.6 85.0 85.4 90.4 90.6 90.7 90.9 (+1.3) DM 82.8 81.2 82.2 81.0 82.4 83.8 83.9 84.3 85.0 (+2.2) PT 85.0 85.9 85.1 85.1 87.6 89.1 89.3 89.3 89.6 (+3.7) Table 3: The overall results on five datasets.", "4.7 Analysis of Feature Distribution As introduced in 3.2, in the process of adversarial learning, domain-independent encoders could learn domain-specific features H s and H t , and the sharing encoder could learn domain-agnostic features H s and H t .", "We use t -SNE (Maaten and Hinton, 2008) algorithm to project these feature representations into planar points for visualization to further analyze the feature learning condition.", "As illustrated in Figure 5, domain-independent features H s", "able improvements of the F1-scores.", "The results demonstrate that the adversarial network could capture deeper semantic features than simply using the GCNN-CRF model, via better making use of the information from both source and target domains.", "(green) and H t (black) have little overlap, indicating the distribution gap between different domains.", "However, the domain-agnostic feature distributions H s (red) and H t (blue) are very similar, implying that the learned feature representation can be well shared by both domains.", "In this subsection, we analyze the impact of the data usage for both source and target domain, the experiment is conducted on the PKU (source) and DL (target) datasets.", "In Figure 6, we respectively select 20% , 40% , 60% , 80% and 100% of the source domain data and 1% , 5% , 20% , 50% , 100% of the target domain data to perform the training procedure.", "The result demonstrates that increasing source and target data will both lead to an increase F1-score.", "Generally, the amount of the target data gives more impact on the whole performance, which conforms to the intuition.", "The 1% Target Training Data line indicates that the performance of the model will be strictly limited if the target data is severely missing.", "But when the amount of the target data increase to 5% , the performance will be improved significantly, which shows the ability to explore domain-specific information for our method.", "In this paper, we intuitively propose a unified framework via coupling distant annotation and adversarial training for the cross-domain CWS task.", "In our method, we investigate an automatic distant annotator to build the labeled target domain dataset, effectively address the OOV issue.", "Further, an adversarial training procedure is designed to capture information from both the source and target domains.", "Empirical results show that our framework significantly outperforms other proposed methods, achieving the state-of-the-art result on all five datasets across different domains.", "We sincerely thank all the reviewers for their insightful comments and suggestions.", "This research is partially supported by National Natural Science Foundation of China (Grant No. 61773229 and 61972219), the Basic Research Fund of Shenzhen City (Grand No. JCYJ20190813165003837), and Overseas Cooperation Research Fund of Graduate School at Shenzhen, Tsinghua University (Grant No. HW2018002)." ]
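The training-loop fragment near the top of this section alternates between two composite losses (L_src + L_tgt + L_d on odd steps, L_src + L_tgt + L_c on even steps) over a shared encoder with source- and target-domain heads plus a domain classifier. The exact definitions of L_d and L_c are not reproduced here, so the following is only a minimal sketch of that alternating schedule under stated assumptions: the toy module sizes, the fake batches, and the placeholder discriminator losses are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy dimensions (assumptions, not the paper's settings).
FEAT, HID, NUM_TAGS, NUM_DOMAINS = 16, 32, 4, 2

shared_enc = nn.Sequential(nn.Linear(FEAT, HID), nn.Tanh())  # shared (domain-agnostic) encoder
src_head = nn.Linear(HID, NUM_TAGS)                          # source-domain tagging head
tgt_head = nn.Linear(HID, NUM_TAGS)                          # target-domain tagging head
disc = nn.Linear(HID, NUM_DOMAINS)                           # domain classifier / discriminator

tag_loss = nn.CrossEntropyLoss()
dom_loss = nn.CrossEntropyLoss()
opt = torch.optim.Adam(
    list(shared_enc.parameters()) + list(src_head.parameters())
    + list(tgt_head.parameters()) + list(disc.parameters()), lr=1e-3)

def batch(domain_id):
    """Fake mini-batch: 8 'characters' with random features, tag labels and a domain label."""
    x = torch.randn(8, FEAT)
    y = torch.randint(0, NUM_TAGS, (8,))
    d = torch.full((8,), domain_id, dtype=torch.long)
    return x, y, d

for j in range(1, 11):                      # a few alternating training steps
    xs, ys, ds = batch(0)                   # source-domain (e.g. PKU) batch
    xt, yt, dt = batch(1)                   # distantly annotated target-domain batch
    hs, ht = shared_enc(xs), shared_enc(xt)

    l_src = tag_loss(src_head(hs), ys)      # L_src: tagging loss on source data
    l_tgt = tag_loss(tgt_head(ht), yt)      # L_tgt: tagging loss on target data

    # L_d / L_c: two discriminator-related terms, alternated as in the fragment.
    # Both are written here as plain domain-classification losses on different
    # features, purely as placeholders for whatever the original L_d and L_c denote.
    if j % 2 == 1:
        l_extra = dom_loss(disc(hs), ds)    # stands in for L_d
    else:
        l_extra = dom_loss(disc(ht), dt)    # stands in for L_c

    loss = l_src + l_tgt + l_extra
    opt.zero_grad()
    loss.backward()
    opt.step()
```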
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "other", "other" ]
[ "We often use perturbations to regularize neural models.", "For neural encoder-decoders, previous studies applied the scheduled sampling (Bengio et al., 2015) and adversarial perturbations (Sato et al., 2019) as perturbations but these methods require considerable computational time.", "Thus, this study addresses the question of whether these approaches are efficient enough for training time.", "We compare several perturbations in sequence-to-sequence problems with respect to computational time.", "Experimental results show that the simple techniques such as word dropout (Gal and Ghahramani, 2016) and random replacement of input tokens achieve comparable (or better) scores to the recently proposed perturbations, even though these simple methods are faster.", "Our code is publicly available at https://github.com/takase/rethink_perturbations.", "Recent advances in neural encoder-decoders have driven tremendous success for sequence-to-sequence problems including machine translation (Sutskever et al., 2014), summarization (Rush et al., 2015), and grammatical error correction (GEC) (Ji et al., 2017).", "Since neural models can be too powerful, previous studies have proposed various regularization methods to avoid over-fitting.", "To regularize neural models, we often apply a perturbation (Goodfellow et al., 2015; Miyato et al., 2017), which is a small difference from a correct input.", "During the training process, we force the model to output the correct labels for both perturbed inputs and unmodified inputs.", "In sequence-to-sequence problems, existing studies regard the following as perturbed inputs: (1) sequences containing tokens replaced from correct ones (Bengio et al., 2015; Cheng et al., 2019), (2) embeddings injected small differences (Sato et al., 2019).", "For example, Bengio et al. (2015) proposed the scheduled sampling that samples a token from the output probability distribution of a decoder and uses it as a perturbed input for the decoder.", "Sato et al. (2019) applied an adversarial perturbation, which significantly increases the loss value of a model, to the embedding spaces of neural encoder-decoders.", "Those studies reported that their methods are effective to construct robust encoder-decoders.", "However, their methods are much slower than the training without using such perturbations because they require at least one forward computation to obtain the perturbation.", "In fact, we need to run the decoder the same times as the required number of perturbations in the scheduled sampling (Bengio et al., 2015).", "For adversarial perturbations (Sato et al., 2019), we have to compute the backpropa-gation in addition to forward computation because we use gradients to obtain perturbations.", "Those properties seriously affect the training budget.", "For example, it costs approximately 1,800 USD for each run when we train Transformer (big) with adversarial perturbations (Sato et al., 2019) on the widely used WMT English-German training set in AWS EC2 1 .", "Most studies conduct multiple runs for the hyper-parameter search and/or model ensemble to achieve better performance (Barrault et al., 2019), which incurs a tremendous amount of training budget for using such perturbations.", "Strubell et al. (2019) and Schwartz et al. (2019) indicated that recent neural approaches increase computational costs substantially, and they encouraged exploring a cost-efficient method.", "For instance, Li et al. 
(2020) explored a training strategy to obtain the best model in a given training time.", "However, previous studies have paid little attention to the costs of computing perturbations.", "Thus, we rethink a time efficient perturbation method.", "In other words, we address the question whether perturbations proposed by recent studies as effective methods are time efficient.", "We compare 1 We assume that we use on-demand instances having 8 V100 GPUs.", "several perturbation methods for neural encoder-decoders in terms of computational time.", "We introduce light computation methods such as word dropout (Gal and Ghahramani, 2016) and using randomly sampled tokens as perturbed inputs.", "These methods are sometimes regarded as baseline methods (Bengio et al., 2015), but experiments on translation datasets indicate that these simple methods surprisingly achieve comparable scores to those of previous effective perturbations (Bengio et al., 2015; Sato et al., 2019) in a shorter training time.", "Moreover, we indicate that these simple methods are also effective for other sequence-to-sequence problems: GEC and summarization.", "In this paper, we address sequence-to-sequence problems such as machine translation with neural encoder-decoders, and herein we provide a definition of encoder-decoders.", "In sequence-to-sequence problems, neural encoder-decoders generate a sequence corresponding to an input sequence.", "Let x 1: I and y 1: J be input and output token sequences whose lengths are I and J , respectively: x 1: I = x 1 , ..., x I and y 1: J = y 1 , ..., y J .", "Neural encoder-decoders compute the following conditional probability: p ( Y | X ) = J +1 (cid:89) j =1 p ( y j | y 0: j 1 , X ) , (1) where y 0 and y J +1 are special tokens representing beginning-of-sentence (BOS) and end-of-sentence (EOS) respectively, X = x 1: I , and Y = y 1: J +1 .", "In the training phase, we optimize the parameters to minimize the negative log-likelihood in the training data.", "Let D be the training data consisting of a set of pairs of X n and Y n : D = { ( X n , Y n ) } |D| n =1 .", "We minimize the following loss function: L ( ) = 1 |D| (cid:88) ( X , Y ) D log p ( Y | X ; ) .", "This section briefly describes perturbations used in this study.", "This study focuses on three types of perturbations: word replacement , word dropout , and adversarial perturbations .", "Figure 1 shows perturbations used in this study.", "As shown in this figure, we can use all types of perturbations in the 5 Encoder x 1 Correct input tokens Word replacement Word dropout Adversarialperturbation x ' 1 b x 1 e ( x ' 1 ) r x 1 x 2 x ' 2 b x 2 e ( x ' 2 ) r x 2 y 0 y ' 0 b y 0 e ( y ' 0 ) r y 0 y 1 y ' 1 b y 1 e ( y ' 1 ) r y 1 Decoder y 1 y 2 Figure 1: Overview of perturbations used in this study.", "same time because perturbations are orthogonal to each other.", "In fact, we combine word replacement with word dropout in our experiments.", "For any approach that uses a sampled token instead of a correct token, such as the scheduled sampling (Bengio et al., 2015), we refer to this as a word replacement approach.", "In this approach, we construct a new sequence whose tokens are randomly replaced with sampled tokens.", "For the construction from the sequence X , we sample x i from a distribution Q x i and use it for the new sequence X (cid:48) with the probability : x i Q x i , (3) x (cid:48) i = (cid:40) x i with probability x i with probability 1 .", "(4) We construct Y (cid:48) from the sequence Y in the same manner.", "Bengio et al. 
(2015) used a curriculum learning strategy to adjust , and thus proposed several functions to decrease based on the training step.", "Their strategy uses correct tokens frequently at the beginning of training, whereas it favors sampled tokens frequently at the end of training.", "We also adjust with their use of the inverse sigmoid decay: t = max (cid:18) q, k k + exp( tk ) (cid:19) (5) where q and k are hyper-parameters.", "In short, t decreases to q from 1 , depending on the training step t .", "We use t as at t .", "For Q x i , we prepare three types of distributions: conditional probability , uniform , and similarity .", "Conditional Probability: REP (SS) Bengio et al. (2015) proposed the scheduled sampling which uses predicted tokens during training to address the gap between training and inference.", "Formally, the scheduled sampling uses the following conditional probability as Q y i : p ( y i | y (cid:48) 0: i 1 , X ) .", "Since the scheduled sampling is the method to compute the perturbation for the decoder side only, it uses the correct sequence as the input of the encoder side.", "In other words, the scheduled sampling does not provide any function for Q x i .", "The original scheduled sampling repeats the decoding for each of the tokens on the decoder side, and thus, requires computational time in proportion to the length of the decoder-side input sequence.", "To address this issue, Duckworth et al. (2019) proposed a more time efficient method: parallel scheduled sampling which computes output probability distributions corresponding to each position simultaneously.", "In this study, we use parallel scheduled sampling instead of the original method.", "Uniform: REP (UNI ) The scheduled sampling is slow even if we use parallel scheduled sampling because it requires decoding at least once to compute Equation (6).", "Thus, we introduce two faster methods to explore effective perturbations from the perspective of computational time.", "In uniform , we use the uniform distributions on each vocabulary as Q x i and Q y i , respectively.", "For example, we randomly pick up a token from the source-side vocabulary and use the token as x i in Equation (4) to construct the source-side perturbed input.", "This method is used as the baseline in the previous study (Bengio et al., 2015).", "Similarity: REP (SIM ) We also explore more sophisticated way than the uniform distribution.", "We assume that the conditional probability of Equation (6) assigns high probabilities to tokens that are similar to the correct input token.", "Based on this as-sumption, we construct a distribution that enables us to sample similar tokens frequently.", "Let V x be the source-side vocabulary, E x R |V x | d x be the d x dimensional embedding matrix, and e ( x i ) be the function returning the embedding of x i .", "We use the following probability distribution as Q x i : softmax( E x e ( x i )) , (7) where softmax( . 
) is the softmax function.", "Thus, Equation (7) assigns high probabilities to tokens whose embeddings are similar to e ( x i ) .", "In other words, Equation (7) is the similarity against x i without considering any context.", "We compute the probability distribution for the target side by using e ( y i ) in the same manner.", "We apply the word dropout technique to compute the perturbed input.", "Word dropout randomly uses the zero vector instead of the embedding e ( x i ) for the input token x i (Gal and Ghahramani, 2016): b x i Bernoulli( ) , (8) WDrop( x i , b x i ) = b x i e ( x i ) , (9) where Bernoulli( ) returns 1 with the probability and 0 otherwise.", "Thus, WDrop( x i , b x i ) returns e ( x i ) with the probability and the zero vector otherwise.", "We apply Equation (9) to each token in the input sequence.", "Then, we use the results as the perturbed input.", "Miyato et al. (2017) proposed a method to compute adversarial perturbations in the embedding space.", "Their method adds adversarial perturbations to input embeddings instead of replacing correct input tokens with others.", "Sato et al. (2019) applied this approach to neural encoder-decoders and reported its effectiveness.", "Thus, this study follows the methods used in Sato et al. (2019).", "The method seeks the adversarial perturbation, which seriously damages the loss value, based on the gradient of the loss function L ( ) .", "Then, we add the adversarial perturbation to the input token embedding.", "Let r x i R d x be the adversarial perturbation vector for the input token x i .", "We obtain the perturbed input embedding e (cid:48) ( x i ) with the following equations: e (cid:48) ( x i ) = e ( x i ) + r x i , (10) r x i = (cid:15) c x i || c x i || , (11) c x i = e ( x i ) L ( ) , (12) where (cid:15) is a hyper-parameter to control the norm of the adversarial perturbation.", "We apply the above equations to all tokens in the input sequence.", "In the training using word replacement and/or word dropout perturbations, we search the parameters", "predicting the correct output sequence from the perturbed input.", "For example, in the word replacement approach, we minimize the following negative log-likelihood: L (cid:48) ( ) = 1 |D| (cid:88) D log p ( Y | X (cid:48) , Y (cid:48) ; ) , = 1 |D| (cid:88) D J +1 (cid:88) j =1 log p ( y j | y (cid:48) 0: j 1 , X (cid:48) ; ) .", "Virtual Adversarial Training When we use adversarial perturbations, we train parameters of the neural encoder-decoder to minimize both Equation (2) and a loss function A ( ) composed of perturbed inputs: J ( ) = L ( ) + A ( ) , (14) where is a hyper-parameter to control the balance of two loss functions.", "This calculation seems to be reasonably time efficient because adversarial perturbations require computing Equation (2).", "Sato et al. (2019) used the virtual adversarial training originally proposed in Miyato et al. 
(2016) as a loss function for perturbed inputs.", "In the virtual adversarial training, we regard the output probability distributions given the correct input sequence as positive examples: A ( ) = 1 |D| (cid:88) DKL ( p ( | X ; ) || p ( | X , r ; )) , (15) where r represents a concatenated vector of adversarial perturbations for each input token, and KL( || ) denotes the KullbackLeibler divergence.", "To obtain findings on sequence-to-sequence problems, we conduct experiments on various situations: different numbers of training data and multiple tasks.", "We mainly focus on translation datasets because machine translation is a typical sequence-to-sequence problem.", "We regard the widely used WMT English-German dataset as a standard setting.", "In addition, we vary the number of training data in machine translation: high resource in Section 4.2 and low resource in Section 4.3.", "Table 1 summarizes the number of training data in each configuration.", "Moreover, we conduct experiments Setting Genuine Synthetic Standard 4.5M High Resource 4.5M 20.0M Low Resource 160K Table 1: Sizes of training datasets on our machine translation experiments.", "on other sequence-to-sequence problems: grammatical error correction (GEC) in Section 5 and summarization in Appendix A to confirm whether the findings from machine translation are applicable to other tasks.", "Datasets We used the WMT 2016 English-German training set, which contains 4.5M sentence pairs, in the same as Ott et al. (2018), and followed their pre-processing.", "We used newstest2013 as a validation set, and newstest2010-2012, and 2014-2016 as test sets.", "We measured case-sensitive detokenized BLEU with SacreBLEU (Post, 2018) 2 .", "Methods We used Transformer (Vaswani et al., 2017) as a base neural encoder-decoder model because it is known as a strong neural encoder-decoder model.", "We used two parameter sizes: base and big settings in Vaswani et al. (2017).", "We applied perturbations described in Section 3 for comparison.", "For parallel scheduled sampling (Duckworth et al., 2019), we can compute output probability distributions multiple times but we used the first decoding result only because it is the fastest approach.", "We set q = 0 .", "9 , k = 1000 , and = 0 .", "9 .", "For ADV , we used the same hyper-parameters as in Sato et al. (2019).", "Our implementation is based on fairseq 3 (Ott et al., 2019).", "We trained each model for a total of 50,000 steps.", "Preliminary: To which sides do we apply perturbations?", "As described, perturbations based on REP (SS) can be applied to the decoder side only.", "Sato et al. (2019) reported their method was the most effective when they applied their ADV to both encoder and decoder sides.", "However, we do not have evidence for suitable sides in applying other perturbations.", "Thus, we applied REP (UNI ), 2 As reported in Ott et al. 
(2018), the BLEU score from SacreBLEU is often lower than the score from multi-bleu.perl but SacreBLEU is suitable for scoring WMT datasets (Post, 2018).", "3 https://github.com/pytorch/fairseq Method Position 2010 2011 2012 2013 2014 2015 2016 Average Transformer (base) w/o perturbation -24.27 22.06 22.43 26.11 27.13 29.70 34.40 26.59 REP (UNI ) enc 24.26 21.95 22.33 25.76 26.70 29.08 34.61 26.38 dec 24.27 21.99 22.29 26.31 27.28 29.74 34.42 26.61 both 24.30 22.20 22.43 26.06 26.82 29.42 34.13 26.48 REP (SIM ) enc 24.12 22.02 22.14 26.21 27.01 29.33 34.56 26.48 dec 24.32 21.96 22.55 26.36 27.23 29.86 34.33 26.66 both 23.94 21.85 22.29 25.84 26.61 29.50 34.20 26.32 WDROP enc 24.31 22.12 22.45 26.20 27.09 29.95 34.58 26.67 dec 23.96 22.08 22.22 26.36 27.08 29.91 33.98 26.51 both 24.33 22.14 22.35 26.10 26.82 29.51 34.51 26.54 Transformer (big) w/o perturbation -24.22 22.11 22.69 26.60 28.46 30.50 33.58 26.88 REP (UNI ) enc 24.79 22.49 23.10 27.07 28.39 30.52 34.51 27.27 dec 24.33 22.34 22.63 26.93 28.22 30.36 33.41 26.89 both 24.75 22.68 23.32 27.01 28.89 31.38 34.94 27.57 REP (SIM ) enc 24.68 22.91 23.13 27.03 28.25 30.81 34.40 27.32 dec 24.51 22.22 22.83 26.46 28.64 30.68 33.58 26.99 both 24.77 22.50 23.10 26.91 28.98 31.03 34.29 27.37 WDROP enc 24.60 22.32 23.27 27.07 28.40 31.00 34.61 27.32 dec 24.53 22.33 22.75 27.00 28.56 30.58 33.20 26.99 both 24.92 22.71 23.40 27.11 28.73 30.99 34.80 27.52 REP (UNI )+WD ROP both 24.82 22.82 23.38 27.30 28.56 30.65 35.02 27.51 REP (SIM )+WD ROP both 24.83 22.95 23.40 27.23 28.65 30.88 35.05 27.57 Table 2: BLEU scores on newstest2010-2016 and averaged scores.", "REP (SIM ), and WDROP to the encoder side, decoder side, and both as preliminary experiments.", "Table 2 shows BLEU scores on newstest2010-2016 and averaged scores when we varied the position of the perturbations.", "In this table, we indicate better scores than the original Transformer (Vaswani et al., 2017) (w/o perturbation) in bold.", "This table shows that it is better to apply word replacement (REP (UNI ) and REP (SIM )) to the decoder side in Transformer (base).", "For WDROP , applying the encoder side is slightly better than other positions in Transformer (base).", "In contrast, applying perturbations to both sides achieved the best averaged BLEU scores for all methods in Transformer (big).", "These results imply that it is better to apply to word replacement and/or word dropout to both encoder and decoder sides if we prepare enough parameters for neural encoder-decoders.", "Based on these results, we select methods to compare against scheduled sampling (REP (SS)) and adversarial perturbations (ADV ).", "Table 2 also shows the results when we combined each word replacement with word dropout (REP (UNI )+WD ROP and REP (SIM )+WD ROP ).", "REP (SIM )+WD ROP slightly outperformed the separated settings.", "of each method and computational speeds 4 based on Transformer (base) without any perturbations, i.e., larger is faster.", "In this table, we indicate the best score of each column for Transformer (base) and (big) settings in bold.", "This table indicates that Transformer without perturbations achieved a comparable score to previous studies (Vaswani et al., 2017; Ott et al., 2018) on newstest2014 in base and big settings.", "Thus, we consider that our trained Transformer models (w/o perturbation) can be regarded as strong baselines.", "This table shows that ADV achieved the best averaged score in Transformer (base), but this method required twice as much training time as the original 
Transformer (base).", "In contrast, REP (SIM ) and WDROP achieved comparable scores to ADV although they slightly affected the computational time.", "REP (UNI ) also achieved a slightly better averaged score than the original Transformer (base).", "In the Transformer (big) setting, all perturbations surpassed the performance of w/o perturbation in the averaged score.", "REP (SS) and ADV improved the performance, but other methods outperformed these two methods with a small training time.", "Moreover, REP (UNI ) and REP (SIM )+WD ROP achieved the best averaged score.", "values and BLEU scores on the validation set for each training time when we applied each perturbation to Transformer (big).", "In addition, Figure 2", "(c) shows the time required to achieve the BLEU score of Transformer w/o perturbation on the validation set (26.60, as described in Table 3).", "These figures show that ADV requires twice as much time or more relative to other methods to achieve performance comparable to others.", "In NLL curves, REP (UNI ), REP (SIM ), and WDROP achieved better values than those of Transformer w/o perturbation in the early stage.", "In addition, WDROP was the fastest to achieve better NLL value.", "Figure 2", "(c) indicates that REP (UNI ), REP (SIM ), and WDROP achieved 26.60 BLEU score with smaller training time than that of Transformer w/o perturbation.", "particular, when we prepare a large number of parameters for Transformer in machine translation, it is better to use these methods (and their combinations) as perturbations.", "We conduct more experiments to investigate whether these methods are also superior in other configurations.", "Datasets We add synthetic parallel data generated from the German monolingual corpus using back-translation (Sennrich et al., 2016a) to the training data used in Section 4.1.", "The origin of the German monolingual corpus is NewsCrawl 2015-2018 5 .", "We randomly sampled 5M sentences from each NewsCrawl corpus, and thus, obtained 20M sentences in total.", "We back-translated the corpus 5 data.statmt.org/news-crawl/de/ Method Positions 2010 2011 2012 2013 2014 2015 2016 Average w/o perturbation -25.63 23.62 24.54 28.39 31.50 32.96 36.47 29.02 REP (UNI ) both 26.36 24.18 25.14 28.54 32.35 33.80 37.73 29.73 REP (SIM ) both 26.04 23.79 25.01 28.43 32.06 33.28 37.40 29.43 WDROP both 26.65 24.34 25.18 28.66 32.25 33.75 37.65 29.78 REP (UNI )+WD ROP both 26.45 24.07 25.09 28.72 32.21 33.42 37.68 29.66 REP (SIM )+WD ROP both 26.55 24.20 25.19 28.55 31.92 33.64 37.96 29.72 REP (SS) dec 25.81 23.64 24.73 28.46 31.84 33.29 36.59 29.19 ADV both 25.79 24.07 24.92 28.64 32.04 33.35 37.20 29.43 Table 4: BLEU scores of each method trained with a large amount of data.", "with the German-English translation model, which is identical to Transformer (big) (w/o perturbation) used in Section 4.1 except for the direction of translation.", "Finally, we prepended a special token (cid:104) BT (cid:105) to the beginning of the source (English) side of the synthetic data following (Caswell et al., 2019).", "In addition, we upsampled the original bitext to adjust the ratio of the original and synthetic bitexts to 1:1.", "Methods In this setting, we increase the parameter size of Transformer from the (big) setting to take advantage of large training data.", "Specifically, we increased the internal layer size of the FFN part from 4096 to 8192, and used 8 layers for both the encoder and decoder.", "The other hyper-parameters are same as in Section 4.1.", "Results Table 4 shows BLEU 
scores of each method when we used a large amount of training data.", "This table indicates that all perturbations outperformed Transformer w/o perturbation in all test sets.", "Moreover, the fast methods REP (UNI ), REP (SIM ), WDROP , and their combinations achieved the same or better averaged scores than REP (SS) and ADV .", "Thus, these methods are not only fast but also significantly improve the performance of Transformer.", "In particular, since Table 3 shows that REP (UNI ) and WDROP barely have any negative effect on the computational time, we consider them as superior methods.", "Datasets We also conduct an experiment on a low resource setting.", "We used IWSLT 2014 German-English training set which contains 160k sentence pairs.", "We followed the preprocessing described in fairseq 6 (Ott et al., 2019).", "We used dev2010, 2012, and tst2010-2012 as a test set.", "Results Table 5 shows BLEU scores of each method on the low resource setting.", "We trained three models with different random seeds for each method, and reported the averaged scores.", "In this table, we also report the results of REP (UNI ), REP (SIM ), WDROP , and their combinations trained with twice the number of updates (below 2 training steps).", "This table shows that all perturbations also improved the performance from Transformer w/o perturbation.", "In contrast to Tables 3 and 4, ADV achieved the top score when each model was trained with the same number of updates.", "However, as reported in Section 4.1, ADV requires twice or more as long as other perturbations for training.", "Thus, when we train Transformer with other perturbations with twice the number of updates, the training time is almost equal.", "In the comparison of (almost) equal training time, WDROP achieved a comparable score to ADV .", "Moreover, REP (UNI )+WD ROP and REP (SIM )+WD ROP 7 outperformed ADV .", "Thus, in this low resource setting, REP (UNI )+WD ROP and REP (SIM )+WD ROP are slightly better than ADV in computational time.", "7 In the low resource setting, we applied only WDROP to an encoder side for REP (UNI )+WD ROP and REP (SIM )+WD ROP because the configuration achieved better performance than applying both perturbations to both sides.", "Recent studies have used perturbations, especially adversarial perturbations, to improve the robustness of encoder-decoders (Sato et al., 2019; Cheng et al., 2019; Wang et al., 2019).", "In particular, Cheng et al. 
(2019) analyzed the robustness of models trained with their adversarial perturbations over perturbed inputs.", "Following them, we also investigate the robustness of our trained Transformer (big) models.", "We constructed perturbed inputs by replacing words in source sentences based on pre-defined ratio.", "If the ratio is 0.0, we use the original source sentences.", "In contrast, if the ratio is 1.0, we use the completely different sentences as source sentences.", "We set the ratio 0 .", "01 , 0 .", "05 , and 0 .", "10 .", "In this process, we replaced a randomly selected word with a word sampled from vocabulary based on uniform distribution.", "We applied this procedure to source sentences in newstest2010-2016.", "Table 6 shows averaged BLEU scores 8 of each method on perturbed newstest2010-2016.", "These BLEU scores are calculated against the original reference sentences.", "This table indicates that all perturbations improved the robustness of the Transformer (big) because their BLEU scores are better than one in the setting w/o perturbation.", "In comparison among perturbations, REP (SIM ) (and REP (SIM )+WD ROP ) achieved significantly better scores than others on perturbed inputs.", "We emphasize that REP (SIM ) surpassed ADV even though ADV is originally proposed to improve the robustness of models.", "This result implies that REP (SIM ) is effective to construct robust models as well as to improve the performance.", "Datasets Following Kiyono et al. (2020), we used a publicly available dataset from the BEA shared task (Bryant et al., 2019).", "This dataset contains training, validation, and test splits.", "We also used the CoNLL-2014 test set (CoNLL) (Ng et al., 2014) as an additional test set.", "We report F 0 .", "5 score measured by the ERRANT scorer (Bryant et al., 2017; Felice et al., 2016) for the BEA dataset and M 2 scorer (Dahlmeier and Ng, 2012) for CoNLL.", "Methods We used the same settings as Kiyono et al. (2020).", "Specifically, we trained Transformer (big) model w/o perturbation on the same parallel pseudo data provided by Kiyono et al. (2020), and then fine-tuned the model with perturbations.", "The hyper-parameters for perturbations are the same as those described in Section 4.1.", "Results Table 7 shows the results of each method.", "This table reports the averaged score of five models trained with different random seeds.", "Moreover, we present the scores of Kiyono et al. (2020); our w/o perturbation model is a rerun of their work, that is, the experimental settings are identical 9 .", "Table 7 shows that all perturbations improved the scores except for REP (UNI ) and REP (SIM ) in the BEA test set (Test).", "Similar to the machine translation results, the simple methods WDROP , REP (UNI )+WD ROP , and REP (SIM )+WD ROP achieved comparable scores to ADV .", "Thus, these faster methods are also effective for the GEC task.", "Word Replacement The naive training method of neural encoder-decoders has a discrepancy between training and inference; we use the correct 9", "tokens as inputs of the decoder in the training phase but use the token predicted at the previous time step as an input of the decoder in the inference phase.", "To address this discrepancy, Bengio et al. (2015) proposed the scheduled sampling that stochastically uses the token sampled from the output probability distribution of the decoder as an input instead of the correct token.", "Zhang et al. (2019) modified the sampling method to improve the performance.", "In addition, Duckworth et al. 
(2019) refined the algorithm to be suited to Transformer (Vaswani et al., 2017).", "Their method is faster than the original scheduled sampling but slower and slightly worse than more simple replacement methods such as REP (UNI ) and REP (SIM ) in our experiments.", "Xie et al. (2017) and Kobayashi (2018) used the un-igram language model and neural language model respectively to sample tokens for word replacement.", "In this study, we ignored contexts to simplify the sampling process, and indicated that such simple methods are effective for sequence-to-sequence problems.", "Word Dropout Gal and Ghahramani (2016) applied word dropout to a neural language model and it is a common technique in language modeling (Merity et al., 2018; Yang et al., 2018; Takase et al., 2018).", "Sennrich and Zhang (2019) reported that word dropout is also effective for low resource machine translation.", "However, word dropout has not been commonly used in the existing sequence-to-sequence systems.", "Experiments in this study show that word dropout is not only fast but also contributes to improvement of scores in various sequence-to-sequence problems.", "Adversarial Perturbations Adversarial perturbations were first discussed in the field of image processing (Szegedy et al., 2014; Goodfellow et al., 2015).", "In the NLP field, Miyato et al. (2017) applied adversarial perturbations to an embedding space and reported its effectiveness on text classi-fication tasks.", "In sequence-to-sequence problems, Wang et al. (2019) and Sato et al. (2019) applied adversarial perturbations to embedding spaces in neural encoder-decoders.", "Moreover, Sato et al. (2019) used virtual adversarial training (Miyato et al., 2016) in training of their neural encoder-decoders.", "Cheng et al. (2019) computed token-level adversarial perturbations in machine translation.", "In other words, they introduced the strategy of adversarial perturbations into word replacement.", "Their method is also effective but requires more computational time than Wang et al. (2019) and Sato et al. 
(2019) because it runs language models to obtain candidate tokens for perturbations.", "We compared perturbations for neural encoder-decoders in terms of computational time.", "Experimental results show that simple techniques such as word dropout (Gal and Ghahramani, 2016) and random replacement of input tokens achieved comparable scores to sophisticated perturbations: the scheduled sampling (Bengio et al., 2015) and adversarial perturbations (Sato et al., 2019), even though those simple methods are faster.", "In the low-resource setting in machine translation, adversarial perturbations achieved a high BLEU score, but the simple methods also achieved comparable scores to the adversarial perturbations when almost the same training time was spent.", "In terms of the robustness of trained models, REP (SIM) is superior to the others.", "This study indicates that simple methods are sufficiently effective, and thus, we encourage using such simple perturbations as a first step.", "In addition, we hope that researchers studying perturbations will use these simple perturbations as baselines to make the usefulness of their proposed methods clear.", "We thank Motoki Sato for sharing his code with us to compare adversarial perturbations.", "We thank Jun Suzuki and Sosuke Kobayashi for their valuable comments.", "We thank anonymous reviewers for their useful suggestions.", "This work was supported by JSPS KAKENHI Grant Number JP18K18119 and JST ACT-X Grant Number JPMJAX200I.", "The first author is supported by Microsoft Research Asia (MSRA) Collaborative Research Program." ]
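Equations (3)-(5) and (8)-(9) earlier in this section define the uniform word-replacement perturbation with an inverse-sigmoid-decayed keep probability and the word-dropout perturbation. The following is a small self-contained sketch of both; the vocabulary, embedding table and hyper-parameter values are toy placeholders rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB_SIZE, EMB_DIM = 1000, 8
emb = rng.normal(size=(VOCAB_SIZE, EMB_DIM))        # toy embedding table

def inverse_sigmoid_decay(t, q=0.9, k=1000.0):
    """Equation (5): keep probability decays from 1 toward the floor q as step t grows."""
    return max(q, k / (k + np.exp(t / k)))

def replace_uniform(tokens, keep_prob):
    """REP(UNI), Equations (3)-(4): keep each token with probability keep_prob,
    otherwise replace it with a token sampled uniformly from the vocabulary."""
    out = []
    for tok in tokens:
        if rng.random() < keep_prob:
            out.append(tok)
        else:
            out.append(int(rng.integers(VOCAB_SIZE)))
    return out

def word_dropout(tokens, gamma=0.9):
    """WDrop, Equations (8)-(9): use the zero vector instead of the token embedding
    with probability 1 - gamma (a Bernoulli draw per token)."""
    vecs = []
    for tok in tokens:
        keep = rng.random() < gamma
        vecs.append(emb[tok] if keep else np.zeros(EMB_DIM))
    return np.stack(vecs)

# Toy usage on a fake token sequence late in training (t = 20000).
seq = [5, 42, 7, 7, 99]
keep = inverse_sigmoid_decay(t=20000)
print(keep)                               # hits the floor q = 0.9 late in training
print(replace_uniform(seq, keep))
print(word_dropout(seq).shape)            # (5, 8)
```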
[ "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "method", "method", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "result", "abstain", "objective", "objective", "other", "other", "other", "other", "other" ]
[ "Training data for sentiment analysis are abundant in multiple domains, yet scarce for other domains.", "It is useful to leveraging data available for all existing domains to enhance performance on different domains.", "We investigate this problem by learning domain-specific representations of input sentences using neural network.", "In particular, a descriptor vector is learned for representing each domain, which is used to map adversarially trained domain-general Bi-LSTM input representations into domain-specific representations.", "Based on this model, we further expand the input representation with exemplary domain knowledge, collected by attending over a memory network of domain training data.", "Results show that our model outperforms existing methods on multi-domain sentiment analysis significantly, giving the best accuracies on two different benchmarks.", "Sentiment analysis has received constant research attention due to its importance to business (Pang et al., 2002; Hu and Liu, 2004; Choi and Cardie, 2008; Socher et al., 2012; Vo and Zhang, 2015; Tang et al., 2014).", "For multiple domains, such as movies, restaurants and digital products, manually annotated datasets have been made available.", "A useful research question is how to leverage resources available across all domains to improve sentiment classification on a certain domain.", "One naive domain-agnostic baseline is to combine all training data, ignoring domain differences.", "However, domain knowledge is one valuable source of information available.", "To utilize this, there has been recent work on domain-aware models via multi-task learning (Liu et al., 2016; Nam and Han, 2016), building an output layer for each domain while sharing a representation network.", "Given an input sentence and a specific test domain, the output layer of the test domain is chosen for calculating the output.", "These methods have been shown to improve over the naive domain-agnostic baseline.", "However, a limitation is that outputs for different domains are constructed using the same domain-agnostic input representation, which leads to weak utilization of domain knowledge.", "For different domains, sentiment words can differ.", "For example, the word beast can be a positive indicator of camera quality, but irrelevant to restaurants or movies.", "Also, easy is frequently used in the electronics domain to express positive sentiment (e.g. the camera is easy to use), while expressing negative sentiment in the movie domain (e.g. 
the ending of this movie is easy to guess).", "We address this issue by investigating a model that learns domain-specific input representations for multi-domain sentiment analysis.", "In particular, given an input sentence, our model first uses a bidirectional LSTM to learn a general sentence-level representation.", "For better utilizing data from all domains, we use adversarial training (Ganin and Lempitsky, 2015; Goodfellow et al., 2014) on the Bi-LSTM representation.", "The general sentence representation is then mapped into a domain-specific representation by attention over the input sentence using explicitly learned domain descriptors , so that the most salient parts of the input are selected for the specific domain for sentiment classification.", "Some examples are shown in Figure 2, where our model pays attention to word engaging for movie reviews, but not for laptops, restaurants or cameras.", "Similarly, the word beast receives attention for laptops and cameras, but not for restaurants or movies.", "In addition to the domain descriptors, we further introduce a memory network for explicitly representing domain knowledge .", "Here domain knowl-541 I am satisfied with this camera Sequence: Embedding Layer Lookup: Bi-LSTM: Average Pooling: Softmax: \"", "(c) Our model: domain knowledge is better utilized by domain descriptors, memories and adversarial training.", "edge refers to example training data in a specific domain, which can offer useful background context.", "For example, given a sentence Keep cool if you think it's a wonderful life will be a heartwarming tale about life like finding nemo', algorithms can mistakenly classify it as positive based on wonderful' and heartwarming', ignoring the fact that it's a wonderful life' is a movie.", "In this case, necessary domain knowledge revealed in other sentences, such as The last few minutes of the movie: it's a wonderful life don't cancel out all the misery the movie contained' is helpful.", "Given a domain-specific input representation, we make attention over the domain knowledge memory network to obtain a background context vector, which is used in conjunction with the input representation for sentiment classification.", "Results on two real-world datasets show that our model outperforms the aforementioned multi-task learning methods for domain-aware training, and also generalizes to unseen domains.", "Our code is released 1 .", "Formally, we assume the existence of m sentiment datasets { D i } mi =1 , each being drawn from a domain i .", "D i contains | D i | data points ( s ij , d i , y ij ) , where s ij is a sequence of words w 1 , w 2 ...w | s ij | , each being drawn from a vocabulary V , y ij indicates the sentiment label (e.g. 
y ij { 1 , +1 } for binary sentiment classification) and d i is a domain indicator (since we use 1 to m to number each domain, d i = i ).", "The task is to learn a function f which maps each input ( s ij , d i ) to its corresponding sentiment label y ij .", "The challenge of the task lies in how to improve the generalization performance of mapping function f both in-domain and cross-domain by exploring the correlations between different domains.", "One naive baseline solution ignores the domain characteristics when learning f .", "It simply combines the datasets { D i } mi =1 into one and learns a single mapping function f .", "We refer to this baseline as Mix , which is depicted in Figure 1", "(a).", "Given an input s ij , its word sequence w 1 , w 2 ...w | s ij | is fed into a word embedding layer to obtain embedding vectors x 1 , x 2 ...x | s ij | .", "The word embedding layer is parameterized by an embedding matrix E w RK | V | , where K is the embedding dimension.", "Bidirectional LSTM : To acquire a semantic representation of input s ij , a bidirectional extension (Graves and Schmidhuber, 2005) of Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) is applied to capture sentence-level semantics both left-to-right and right-to-left.", "As a result, two sequences of hidden states are obtained, denoted as (cid:1) h 1 , (cid:1) h 2 ... (cid:1) h | s ij | and (cid:0) h 1 , (cid:0) h 2 ... (cid:0) h | s ij | , respectively.", "We concatenate (cid:1) h t 1 https://github.com/leuchine/ multi-domain-sentiment 542 and (cid:0) h t at each time step to obtain the hidden states h 1 , h 2 ...h | s ij | , which are of sizes 2 K .", "Output Layer : Average pooling (Boureau et al., 2010) is applied on the hidden states h 1 , h 2 ...h | s ij | to obtain an input representation I ij for s ij , I ij = P | s ij | t =1 h t | s ij | (1) Finally, softmax is applied over I ij to obtain a probability distribution of all sentiment labels.", "During training, cross entropy is used as loss function, denoted as L ( f ( s ij ) , y ij ) for data points ( s ij , d i , y ij ) , and AdaGrad (Duchi et al., 2011) is applied to update parameters.", "We build a second baseline for domain-aware sentiment analysis.", "A state-of-the-art architecture (Liu et al., 2016; Nam and Han, 2016) is used as depicted in Figure 1", "(b), where m mapping functions f i are learned for each domain.", "Given the input representation I ij obtained in Equation 1, multi-task learning is conducted, where each domain has a domain-specific set of parameters for softmax to predict sentiment labels with shared input representation layers.", "The input domain indicator d i instructs which set of softmax parameters to use here and each domain has its own cross entropy loss L i ( f i ( s ij , d i ) , y ij ) for data points ( s ij , d i , y ij ) .", "We denote this baseline as Multi .", "The above baseline Multi achieves state-of-the-art performance for multi-domain sentiment analysis (Liu et al., 2016), yet the domain indicator d i is used solely to select softmax parameters.", "As a result, domain knowledge is hidden and under-utilized.", "Similar to Mix and Multi , we use a Bi-LSTM to learn representations shared across domains.", "However, we introduce domain-specific layers to better capture domain characteristics as shown in Figure 1", "(c).", "Different domains have their own sentiment lexicons and domain differences largely lie in which words are relatively more important for deciding the sentiment signals.", "We use the neural 
attention mechanism (Bahdanau et al., 2014) to select words, obtaining domain-specific input representations.",
"In our model, domain descriptors are introduced to explicitly capture domain characteristics, and they are parametrized by a matrix N ∈ R^{2K×m}.",
"Each domain descriptor corresponds to one column of N and has length 2K, the same as the bidirectional LSTM hidden states h_t.",
"This matrix is learned automatically during training.",
"Given an input (s_ij, d_i), we apply an embedding layer and a Bi-LSTM to generate its domain-general representation h_1, h_2, ..., h_{|s_ij|}, and use the corresponding domain descriptor N_i to weigh h_1, h_2, ..., h_{|s_ij|} to obtain a domain-specific representation.",
"To this end, two attention mechanisms are most commonly used: additive attention (Bahdanau et al., 2014) and dot-product attention (Vaswani et al., 2017).",
"We choose additive attention here, which utilizes a feed-forward network with a single hidden layer, since it achieves better accuracies in our development experiments.",
"The input representation I_ij becomes a weighted sum of hidden states: I_ij = Σ_{t=1}^{|s_ij|} a_ijt h_t, s.t. Σ_{t=1}^{|s_ij|} a_ijt = 1 (2). The weight a_ijt reflects the similarity between domain i's descriptor N_i and the hidden state h_t.",
"a_ijt is evaluated as: l_ijt = v^T tanh(P N_i + Q h_t), a_ijt = exp(l_ijt) / Σ_{p=1}^{|s_ij|} exp(l_ijp) (3). Here P ∈ R^{4K×2K}, Q ∈ R^{4K×2K} and v ∈ R^{4K} are the parameters of additive attention.",
"P and Q linearly project N_i and h_t to a hidden layer, respectively.",
"The projected space is set to 4K empirically, since we find it beneficial to project the vectors into a larger layer.",
"v serves as the output layer.",
"Softmax is applied to normalize l_ijt.",
"We name this method DSR, for learning domain-specific representations.",
"DSR uses a single domain descriptor to attend over input words.",
"However, relations between domains are not considered (e.g. sentiment lexicons for the domain 'camera' are more similar to the lexicons of the domain 'laptop' than to those of the domain 'restaurant').",
"To model the interaction between domains, a self-attention layer is applied, using dot-product attention (chosen empirically), as shown in Figure 1",
"(c): N_i^new = N softmax(N^T N_i) (4). We compute dot products between N_i and every domain descriptor.",
"The dot products are normalized using the softmax function, and N_i^new is a weighted sum of all domain descriptors.",
"N_i^new is then used to attend over the hidden states, employing Equations 2 and 3.",
"During backpropagation training, domain descriptors of similar domains can be updated simultaneously.",
"We name this method DSR-sa, which denotes domain-specific representations with self-attention.",
"To further capture domain characteristics, we devise a memory network (Weston et al., 2014; Sukhbaatar et al., 2015; Kumar et al., 2016) framework to explicitly represent domain knowledge.",
"Our memory networks hold example training data of a specific domain for retrieving context data during predictions.",
"Formally, we use a memory M_i ∈ R^{2K×|D_i|} (|D_i| is the total number of training instances of domain i) to hold the domain-specific representations I_ij of the training instances of domain i.",
"Obtaining a Context Vector Using Background Knowledge: Given an input I_ij, we generate a context vector C_ij to support predictions by memory reading:",
"Dot-product attention is applied here, which is faster and more space-efficient than additive attention, since it can be implemented using highly optimized matrix multiplication.",
"Dot products are computed between I_ij and each column of M_i, and the scores are normalized using the softmax function.",
"The final context vector is a weighted sum of M_i's columns.",
"Output: We concatenate the context vector and the domain-specific input representation, feeding the result to the softmax layers.",
"Similar to the baseline Multi, each domain has its own loss L_i(f_i(s_ij, d_i), y_ij).",
"We name this method DSR-ctx, for context vector enhancement.",
"Reducing Memory Size: In the naive implementation, the memory size |M_i| is equal to the total number of saved sequences, which can be very large in practice.",
"We explore two ways to reduce the memory size.",
"(1) Organizing memory by the vocabulary.",
"We set |M_i| = |V|, where each memory column of M_i corresponds to a word in the vocabulary.",
"During memory writing, I_ij updates all the columns that correspond to the words w in its input sequence s_ij by an exponential moving average: M_iw = decay · M_iw + (1 − decay) · I_ij. In this way, two input representations update the same column of the memory network if and only if they share at least one common word.",
"(2) Fixing the memory size by clustering.",
"|M_i| is set to a fixed size, and I_ij only updates the memory column that is most similar to I_ij, i.e. the column arg max((M_i)^T I_ij).",
"In this way, semantically similar inputs are clustered and update the same column.",
"We use embeddings and a Bi-LSTM, parametrized by θ_dg, to generate domain-general representations.",
"However, the distributions of the domain-general representations of different domains can differ (Goodfellow et al., 2014), which contaminates the representations (Liu et al., 2017) and has negative effects on in-domain predictions.",
"For cross-domain testing, the discrepancies cause domain shift, which harms prediction accuracies on target domains (Ganin and Lempitsky, 2015).",
"Thus, models that can generate domain-invariant representations for all domains are favorable for utilizing multi-domain datasets.",
"We incorporate adversarial training to enhance the domain-general representations.",
"As shown in Figure 1",
"(c), domain classifier layers are introduced, parametrized by θ_dc, which predict how likely the input sequence s_ij is to come from each domain i.",
"We denote their cross-entropy loss as L_at(f_at(s_ij), d_i) for data points (s_ij, d_i, y_ij) from domain i (note that we use d_i as the label rather than as an input here).",
"Now consisting of domain-general layers, domain-specific layers and domain classifier layers, the model is trained with a minimax game.",
"For a dataset D_i drawn from domain i, we minimize its loss L_i(f_i(s_ij, d_i), y_ij) for sentiment prediction, while maximizing the domain classifier loss L_at(f_at(s_ij), d_i), controlled by λ: min_{θ_dg, θ_ds} Σ_{D_i} [ L_i(f_i(s_ij, d_i), y_ij) − λ L_at(f_at(s_ij), d_i) ], (7) where θ_ds is the set of domain-specific parameters, including the domain descriptors, attention weights and softmax parameters.",
"We fix θ_dc and update θ_dg and θ_ds here.",
"The adversarial part maximizes the loss by updating θ_dc, while fixing θ_dg and θ_ds.",
"Equations 7 and 8 are applied iteratively to generate domain-invariant representations.",
"We name this method DSR-at.",
"We evaluate the effectiveness of the model both in-domain and cross-domain.",
"The former refers to the setting where the domain of the test data falls into one of the m training domains, and the latter refers to the setting where the test data comes from an unknown domain.",
"We conduct experiments on two benchmark datasets.",
"The datasets are balanced, so we use accuracy as the evaluation metric in the experiments.",
"Dataset 1 contains four domains.",
"The statistics are shown in Table 1, which also shows the accuracies of the baseline method Mix trained and tested on each domain.",
"Camera 2 consists of reviews of digital products such as cameras and MP3 players (Hu and Liu, 2004).",
"Laptop and Restaurant are laptop and restaurant reviews, respectively, obtained from SemEval 2015 Task 12 3.",
"Movie 4 consists of movie reviews provided by Pang and Lee (2004).",
"Dataset 2 is Blitzer's multi-domain sentiment dataset (Blitzer et al., 2007), which contains (footnote 2: http://www.cs.uic.edu/liub/FBS/sentiment-analysis.html; footnote 3: Since the original dataset targets aspect-level sentiment analysis, we remove the sentences with opposite polarities on different aspects.)",
"(Footnote 3, continued:) The remaining sentences are labeled with the unambiguous polarity.",
"(Footnote 4: https://www.cs.cornell.edu/people/pabo/movie-review-data/) Table 1: Dataset 1 statistics (Domain / Instances / Vocab size / Accuracy): Camera (CR) 3770 / 5340 / 0.802; Laptop (LT) 1907 / 2837 / 0.871; Restaurant (RT) 1572 / 2930 / 0.783; Movie (M) 10662 / 18765 / 0.773.",
"product reviews taken from 
Amazon.com, including 25 product types (domains) such as books, beauty and music.",
"More statistics can be found on its official website 5.",
"Given each dataset, we randomly select 80%, 10% and 10% of the instances as the training, development and test sets, respectively.",
"In addition to the Mix baseline, the Multi baseline (Liu et al., 2016) and our domain-aware models DSR, DSR-sa, DSR-ctx and DSR-at, we also experiment with the following baselines:",
"MTRL (Zhang and Yeung, 2012) is a state-of-the-art multi-task learning method with discrete features.",
"The method models covariances between task classifiers, and in turn the covariances regularize the task-specific parameters.",
"The feature extraction for MTRL follows Blitzer et al. (2007).",
"We use this baseline to demonstrate the effectiveness of the dense features generated by neural models.",
"MDA (Chen et al., 2012) is a cross-domain baseline which utilizes marginalized de-noising auto-encoders to learn a shared hidden representation by reconstructing pivot features from corrupted inputs.",
"FEMA (Yang and Eisenstein, 2015) is a cross-domain baseline which utilizes techniques from neural language models to directly learn feature embeddings and is more robust to domain shift.",
"NDA (Kim et al., 2016) is a cross-domain baseline which uses m + 1 LSTMs, where one LSTM captures global information across all m domains and the remaining m LSTMs capture domain-specific information.",
"We set the size of the word embeddings K to 300; they are initialized using the word2vec model 6 trained on news.",
"To obtain the best performance, the parameters are set using grid search based on development results.",
"The dropout ratio is chosen from [0.3, ..., 1].",
"The learning rate is chosen from (footnote 5: https://www.cs.jhu.edu/mdredze/datasets/sentiment/; footnote 6: https://code.google.com/archive/p/word2vec/) (Table 2: Results using two training domains on dataset 1. Columns are MTRL / Mix / Multi / DSR / DSR-sa / DSR-ctx / DSR-at. LT+RT, test LT: 0.817 / 0.896 / 0.90 / 0.908 / 0.911 / 0.914 / 0.92*; LT+RT, test RT: 0.781 / 0.820 / 0.85 / 0.860 / 0.859 / 0.863 / 0.883*; LT+M, test LT: 0.825 / 0.882 / 0.90 / 0.887 / 0.90 / 0.904 / 0.913*; LT+M, test M: 0.743 / 0.778 / 0.772 / 0.788 / 0.79 / 0.803 / 0.811*; LT+CR, test LT: 0.869 / 0.904 / 0.906 / 0.921 / 0.915 / 0.92 / 0.925; LT+CR, test CR: 0.774 / 0.800 / 0.802 / 0.822 / 0.826 / 0.832 / 0.844*; RT+M, test RT: 0.792 / 0.830 / 0.833 / 0.853 / 0.86 / 0.883 / 0.9*; RT+M, test M: 0.729 / 0.765 / 0.785 / 0.795 / 0.801 / 0.816 / 0.83*; RT+CR, test RT: 0.783 / 0.828 / 0.822 / 0.847 / 0.851 / 0.878 / 0.887*; RT+CR, test CR: 0.756 / 0.804 / 0.814 / 0.812 / 0.817 / 0.831 / 0.84*; M+CR, test M: 0.745 / 0.775 / 0.788 / 0.798 / 0.802 / 0.830 / 0.839*; M+CR, test CR: 0.758 / 0.799 / 0.811 / 0.819 / 0.812 / 0.817 / 0.812; Average: 0.778 / 0.818 / 0.832 / 0.842 / 0.845 / 0.857 / 0.867*.)",
"[0.0001, 0.001, ..., 1].",
"The vocabulary size is chosen from [6000, 8000, ..., 16000].",
"The batch size is chosen from [10, ..., 100].",
"λ is chosen from [0.0001, 0.001, ..., 1].",
"As a result, the mini-batch size, the vocabulary size |V|, the dropout rate, the learning rate for AdaGrad and λ for adversarial training are set to 50, 10000, 0.4, 0.5 and 0.1, respectively.",
"Also, gradient clipping (Pascanu et al., 2013) is adopted to prevent gradients from exploding and vanishing during the training process.",
"Since all datasets only have thousands of instances, we set the memory network sizes to the number of training instances in the experiments.",
"In this section, we perform in-domain validations.",
"We first combine two datasets for training and test on each domain's held-out test set.",
"The results on dataset 1 are shown in Table 2 (the results on Blitzer's dataset exhibit similar patterns and are omitted due to space constraints).",
"The accuracies of MTRL are significantly lower than those of the neural models, which demonstrates the effectiveness of dense features over discrete features.",
"The baseline Mix improves the average accuracy from 0.778 to 0.818, and most multi-domain training accuracies are better compared to the single-domain training in Table 1.",
"Mix simply combines the two datasets for training and ignores domain characteristics, yet improves over single-dataset training.",
"This demonstrates that more data reduces over-fitting and leads to better generalization capabilities.",
"Multi further improves the average accuracy by 1.4%, which confirms the effectiveness of utilizing domain information.",
"Among our models, DSR further improves the accuracy over Multi by 1%, which confirms the effectiveness of domain-specific input representations in multi-domain sentiment analysis.",
"DSR-sa slightly outperforms DSR, by 0.03%.",
"Adopting an additional self-attention layer, DSR-sa trains similar domain descriptors together, thus better modeling domain relations, which will be further studied in Section 5.5.2.",
"DSR-ctx outperforms DSR-sa by 1.2%, which demonstrates the effectiveness of memory networks in utilizing domain-specific example knowledge.",
"DSR-at gives the best results by a significant margin, confirming that the domain-invariant representations achieved by adversarial training indeed benefit in-domain training.",
"The results are significant under McNemar's test.",
"The results combining all 4 domains and all 25 domains of the two datasets are shown in the 'In domain' sections of Table 3 and Table 4, respectively.",
"Here the models are trained using all domains' training data, and tested on each domain's held-out test data.",
"Similar patterns are observed as in Table 2, and DSR-at achieves the best accuracies by a significant margin (0.867 and 0.907 for the two datasets, respectively).",
"We validate the algorithms cross-domain.",
"For dataset 1, models are trained on three domains, and validated and tested on the remaining domain.",
"For dataset 2, models are trained on 24 domains, and validated and tested on the 25th.",
"Since DSR-at has m outputs (one for each training domain), we adopt an ensemble approach to obtain a single output for unknown test domains.",
"In particular, since the domain classifier outputs probabilities of how likely the test data is to come from each training domain, we use these probabilities as weights to average the m outputs.",
"For NDA, Multi, DSR, DSR-sa and DSR-ctx, we use average pooling to combine the m outputs.",
"Since MDA and FEMA are devised to train on a single source domain, we combine the training data of the m domains for training.",
"The results are shown in the 'Cross domain' sections of 
Table 3 and Table 4, respectively.",
"One observation is that cross-domain accuracies are worse than in-domain accuracies, showing the challenges of unknown-domain testing.",
"The contrast between our models and FEMA / NDA shows the advantage of leveraging resources from all domains, versus a single source domain, for cross-domain modelling.",
"Among the baselines, (Table 3: In-domain and cross-domain results on dataset 1. In-domain columns are MTRL / Mix / Multi / DSR / DSR-sa / DSR-ctx / DSR-at; cross-domain columns are MTRL / Mix / MDA / Multi / FEMA / NDA / DSR / DSR-sa / DSR-ctx / DSR-at. LT in-domain: 0.813 / 0.831 / 0.900 / 0.897 / 0.902 / 0.898 / 0.915*, cross-domain: 0.763 / 0.792 / 0.801 / 0.808 / 0.811 / 0.816 / 0.822 / 0.823 / 0.854 / 0.878*. RT in-domain: 0.776 / 0.801 / 0.825 / 0.841 / 0.845 / 0.855 / 0.870*, cross-domain: 0.772 / 0.786 / 0.789 / 0.779 / 0.774 / 0.776 / 0.78 / 0.784 / 0.814 / 0.847*. M in-domain: 0.800 / 0.803 / 0.783 / 0.807 / 0.812 / 0.820 / 0.828*, cross-domain: 0.616 / 0.636 / 0.642 / 0.668 / 0.679 / 0.684 / 0.692 / 0.695 / 0.725 / 0.729. CR in-domain: 0.775 / 0.786 / 0.819 / 0.825 / 0.828 / 0.836 / 0.854*, cross-domain: 0.714 / 0.721 / 0.736 / 0.735 / 0.741 / 0.745 / 0.751 / 0.753 / 0.789 / 0.809*. Average in-domain: 0.791 / 0.805 / 0.832 / 0.843 / 0.847 / 0.852 / 0.867*, cross-domain: 0.716 / 0.734 / 0.742 / 0.748 / 0.751 / 0.755 / 0.761 / 0.764 / 0.796 / 0.815*.)",
"NDA also considered domain-specific representations.",
"On the other hand, it duplicates the full set of model parameters for each domain, yet underperforms DSR and DSR-sa, which record only one domain descriptor vector for each domain.",
"The contrast shows the advantages of learning domain descriptors explicitly, in terms of both efficiency and accuracy.",
"Similar to the known-domain results, DSR-ctx and DSR-at further improve upon DSR and DSR-sa, showing the effectiveness of the domain memory and adversarial learning.",
"On both datasets, DSR-at achieves the best performance by a significant margin, which shows the advantages of domain-invariant representations for unknown-domain testing.",
"To obtain a better understanding of input attention with domain descriptors, we examine the attention weights of inputs; three examples are displayed in Figure 2, where the x axis denotes the four domains from the first dataset and the y axis shows the words.",
"In Figure 2",
"(a), the domain-specific word 'ease' is only selected for the domains LT and CR, while the domain-independent word 'great' is salient in all domains.",
"Similarly, in Figure 2",
"(b), 'meaty' and 'engaging' are only salient in RT and M, respectively.",
"In Figure 2",
"(c), the domain-specific word 'beast' is chosen in LT and CR.",
"These confirm the effectiveness of input attention, and show that DSR-ctx has the capability to pick out sentiment lexicons in conformity with domain characteristics.",
"We take the twenty-five domain descriptors for Blitzer's dataset and calculate the cosine similarities between each pair.",
"Also, we calculate the cosine similarities of the twenty-five domains based on unigram and bigram representations, as ground truth.",
"The Pearson correlation coefficient is used to measure the correlation between the two sets of cosine values.",
"The final score is 0.796, which shows that domain descriptor similarities can serve as indicators of domain similarities.",
"(Figure 2 color scale: 0.32 to 0.96.)",
"We further study the attention of the memory networks by randomly picking instances from the test sets and listing the context instances with the greatest attention weights obtained from Equation 6.",
"The results for three test instances and their context instances are shown in Table .",
"One observation is that semantically similar instances are selected to provide extra knowledge for predictions (e.g. a1, a2, b3, c1, c2, c3).",
"Another observation is that the sentiment polarities of test instances and their selected context instances are usually the same.",
"We conclude that the memory networks are capable of selecting instructive instances for facilitating predictions.",
"Domain Adaptation (Blitzer et al., 2007; Titov, 2011; Yu and Jiang, 2015) adapts classifiers trained on a source domain to an unseen target domain.",
"One stream of work focuses on learning a general representation for different domains based on the co-occurrences of domain-specific and domain-independent features (Blitzer et al., 2007; Pan et al., 2011; Yu and Jiang, 2015; Yang et al., 2017).",
"Another stream of work tries to identify domain-specific words to improve cross-domain classification (Bollegala et al., 2011; Li et al., 2012; Zhang et al., 2014; Qiu and Zhang, 2015).",
"Different from previous work, we utilize multiple source domains for cross-domain validation, which makes our method more general and domain-aware.",
"Multi-domain Learning jointly learns multiple domains to improve generalization.",
"One strand of work (Dredze and Crammer, 2008; Saha et al., 2011; Zhang and Yeung, 2012) uses a covariance matrix to model domain relatedness, jointly learning the domain-specific parameters and domain-independent parameters of linear classifiers.",
"Another strand of work (Liu et al., 2016; Nam and Han, 2016) adopts neural networks with shared input layers and multiple output layers for prediction.",
"Our work belongs to the latter, yet we introduce a domain descriptor matrix and memory networks to better capture domain characteristics and achieve better performance.",
"Memory Networks reason with inference components combined with a long-term memory component.",
"Weston et al. (2014) devise a memory network that explicitly stores the entire input sequences for question answering.",
"An end-to-end memory network is further proposed by Sukhbaatar et al. (2015), which stores embeddings of the input sequences and requires much less supervision compared to Weston et al. (2014).",
"Kumar et al. (2016) introduce a general dynamic memory network, which iteratively attends over episodic memories to generate answers.",
"Xiong et al. (2016) extend Kumar et al. 
(2016) by introducing a new architecture to cater image inputs and better capture input dependencies.", "In similar spirits, our memory network stores the domain-specific training instances for obtaining context knowledge.", "We investigated domain representations in multitask learning for multi-domain sentiment analysis, showing that leveraging domain descriptors, examples and adversarial training to learn domain representations give significant improve-548", "strong multi-task learning", "Acknowledgments We thank the anonymous reviewers for their insightful comments.", "Yue Zhang is the corresponding author.", "Unsupervised domain adaptation by backpropagation.", "In International Conference on Machine Learning .", "pages 11801189.", "Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio.", "2014.", "Generative adversarial nets.", "In Advances in neural information processing systems .", "pages 26722680.", "Alex Graves and Jurgen Schmidhuber.", "2005.", "Frame-wise phoneme classification with bidirectional lstm and other neural network architectures.", "Neural Networks pages 602610.", "Sepp Hochreiter and Jurgen Schmidhuber.", "1997.", "Long short-term memory.", "MIT Press, volume 9, pages 17351780.", "Minqing Hu and Bing Liu.", "2004.", "Mining and summarizing customer reviews.", "In Proceedings of the tenth ACM SIGKDD .", "ACM, pages 168177.", "Young-Bum Kim, WA Redmond, Karl Stratos, and Ruhi Sarikaya.", "2016.", "Frustratingly easy neural domain adaptation.", "In Proceedings of COLING 2016 .", "Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher.", "2016.", "Ask me anything: Dynamic memory networks for natural language processing.", "In ICML .", "pages 13781387.", "Fangtao Li, Sinno Jialin Pan, Ou Jin, Qiang Yang, and Xiaoyan Zhu.", "2012.", "Cross-domain co-extraction of sentiment and topic lexicons.", "In ACL .", "Association for Computational Linguistics, pages 410419.", "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang.", "2016.", "Recurrent neural network for text classification with multi-task learning.", "arXiv preprint arXiv:1605.05101 .", "Pengfei Liu, Xipeng Qiu, and Xuanjing Huang.", "2017.", "Adversarial multi-task learning for text classification.", "arXiv preprint arXiv:1704.05742 .", "Hyeonseob Nam and Bohyung Han.", "2016.", "Learning multi-domain convolutional neural networks for visual tracking.", "In CVPR .", "pages 42934302.", "Sinno Jialin Pan, Ivor W Tsang, James T Kwok, and Qiang Yang.", "2011.", "Domain adaptation via transfer component analysis.", "IEEE Transactions on Neural Networks 22(2):199210.", "Bo Pang and Lillian Lee.", "2004.", "A sentimental educa-tion: Sentiment analysis using subjectivity summarization based on minimum cuts.", "In ACL .", "Association for Computational Linguistics, page 271.", "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan.", "2002.", "Thumbs", "up?: sentiment classification using machine learning techniques.", "In ACL .", "Association for Computational Linguistics, pages 7986.", "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio.", "2013.", "On the difficulty of training recurrent neural networks.", "ICML 28:13101318." ]
[ "abstain", "abstain", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "method", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "The input vocabulary and their learned representations are crucial to the performance of neural NLP models.", "Using the full vocabulary results in less explainable and more memory intensive models, with the embedding layer often constituting the majority of model parameters.", "It is thus common to use a smaller vocabulary to lower memory requirements and construct more interpertable models.", "We propose a vocabulary selection method that views words as members of a team trying to maximize the model's performance.", "We apply power indices from cooperative game theory, including the Shapley value and Banzhaf index, that measure the relative importance of individual team members in accomplishing a joint task.", "We approximately compute these indices to identify the most influential words.", "Our empirical evaluation examines multiple NLP tasks, including sentence and document classification, question answering and textual entailment.", "We compare to baselines that select words based on frequency, TF-IDF and regression coefficients under L1 regularization, and show that this game-theoretic vocabulary selection outperforms all baselines on a range of different tasks and datasets.", "Most state-of-the-art NLP methods use neural networks that require a pre-defined vocabulary to vec-torise and encode text.", "In large text datasets, the vocabulary size can grow to hundreds of thousands of words, and having an embedding space over the entire vocabulary results in models that are expensive in memory and compute, and hard to interpret.", "Many of the words in the vocabulary are not crucial to task performance, and can be removed without a significant drop in final task performance.", "Work done during an internship at DeepMind.", "Corresponding email: [email protected] or [email protected].", "It is common to use heuristics such as frequency or TF-IDF to reduce vocabulary size.", "After filtering to obtain a smaller vocabulary, out-of-vocabulary (OOV) words are replaced with an unknown word token < UNK > .", "This reduction in vocabulary size has many advantages.", "Models with reduced vocabulary are more easily interpretable and achieve increased transparency (Adadi and Berrada, 2018; Samek et al., 2019), require less memory, can be used in resource constrained settings, and are less prone to overfitting (Sennrich et al., 2015; Shi and Knight, 2017; L'Hostis et al., 2016; Chen et al., 2019).", "However, reducing the vocabulary size with a heuristic such as frequency is often not optimal.", "For example, Figure 1 shows the top ranked words according to frequency (blue), that are largely unimportant for the sentiment task at hand.", "We consider the vocabulary selection problem: given a target vocabulary size k (or equivalently, a target memory footprint or a budget of model parameters for the embedding layer), what is the optimal word subset we should use as our vocabulary?", "Our solution's output, based on the Shapley value, is also shown in Figure 1, demonstrating that it focuses on words relevant to the task.", "Our Contribution: We use game theoretic principles to propose a vocabulary selection method.", "We cast the vocabulary selection problem as a cooperative game , which considers subset of words as a team whose goal is to solve the NLP task at hand.", "We define the performance of a team as the performance of a model that uses only those words as its vocabulary.", "Our method applies solution concepts from game theory to determine the relative importance of each word in achieving 
the goal.",
"Specifically, we consider the Shapley value (Shapley, 1953) and the Banzhaf index (Banzhaf III, 1964), key concepts in game theory that are used as power indices for measuring the individual contribution of team members to the success of the team.",
"We approximate these indices by sampling subsets of words and training a model on each subset to contrast the model's performance when including and omitting a target word.",
"We evaluate our approach against baselines such as TF, TF-IDF and ranking by logistic regression coefficients under L1 regularization.",
"We evaluate on a range of datasets and task structures: single-sentence classification, pairwise-sentence classification and document classification.",
"While our method is significantly more demanding computationally than these simple baselines, we empirically demonstrate that it outperforms them on all tasks, offering better tradeoffs between the vocabulary size and the model's performance.",
"We assume a dataset D and a training method M for training on D and producing a model f_θ, where θ are the tuned model parameters.",
"The model is evaluated on a validation set T to estimate how well it generalizes to the true data distribution.",
"An evaluation metric (for example the model accuracy or F1 score, as evaluated on the validation set) for each model f_θ is denoted by q(f_θ), thus allowing an assessment of the performance of a subset of words.",
"We first briefly discuss preliminaries from cooperative game theory (Chalkiadakis et al., 2011).",
"A cooperative game is given by a set A = {a_1, ..., a_n} of players and a characteristic function v : 2^A → R mapping any subset of players C ⊆ A (called a team or coalition) to a real value v(C) indicating the performance of the team when working together.",
"The Shapley value (Shapley, 1953), denoted φ(v) = (φ_1, ..., φ_n), reflects each player's individual contribution to the success of the team, adhering to fairness axioms (Dubey, 1975). 1",
"Similarly, the Banzhaf index (Banzhaf III, 1964), denoted β(v) = (β_1, ..., β_n), measures the impact of individuals on the success of the team, using different axioms (Dubey and Shapley, 1979; Straffin Jr, 1988).",
"Consider quantifying the individual contribution of a player a_i ∈ A in a game with characteristic function v.",
"Examine the player a_i and a coalition C ⊆ A ∖ {a_i} that does not contain that player.",
"The marginal contribution of a_i to the coalition C is defined as m(a_i, C) = v(C ∪ {a_i}) − v(C), i.e. the increase in value arising from adding a_i to the coalition C.",
"Similarly, denote the set of permutations over the n players as Π (i.e. each π ∈ Π is a bijection π : A → A), and denote the predecessors of a_i ∈ A in the permutation π as b(a_i, π).",
"The marginal contribution of a_i in the permutation π is defined as m(a_i, π) = v(b(a_i, π) ∪ {a_i}) − v(b(a_i, π)), i.e. the increase in value arising from adding a_i to the players appearing before it in the permutation π.",
"The Banzhaf index β_i of player a_i is the marginal contribution of a_i averaged over all possible coalitions that do not contain that player: β_i = (1 / 2^{n−1}) Σ_{C ⊆ A∖{a_i}} [ v(C ∪ {a_i}) − v(C) ]. The Shapley value φ_i of a player a_i is the marginal contribution of that player, averaged across all permutations: φ_i = (1 / n!) Σ_{π ∈ Π} m(a_i, π).",
"The Banzhaf index of a_i can be viewed as the expected increase in performance under uncertainty about the participation of the other players in the team:",
"1 The Shapley value has also been used to examine power in team formation (Aziz et al., 2009; Mash et al., 2017; Bachrach et al., 2020), combinatorial tasks (Ueda et al., 2011; Banarse et al., 2019), pricing and auctions (Bachrach, 2010; Kamboj et al., 2011; Blocq et al., 2014), political settings (Bilbao et al., 2002; Bachrach et al., 2011; Filmus et al., 2019), and feature importance for model explainability (Lundberg and Lee, 2017).",
"if each of the other players has an equal probability of joining the team or not joining it, how much value do we expect a_i to add when it joins the team.",
"Similarly, the Shapley value can be viewed as the expected increase in team value that a_i would yield when players join the team in a random order. 2",
"2.2 Our Approach: Vocabulary Selection by Comparing Power Indices. Given the entire vocabulary V and a budget of k words to use, our method selects a subset V′ ⊆ V where |V′| = k, optimizing the performance q(f_V′) of a model f_V′ trained using a vocabulary consisting only of the words in V′.",
"We view each word as a player and each subset of words C ⊆ V as a team, and construct a cooperative game.",
"The characteristic function v : 2^V → R maps a subset of words (a partial vocabulary) to the performance obtained when training a model with only these words as the vocabulary.",
"Formally, we define the performance v(C) of the team C ⊆ V to be the performance q(f_C) of an NLP model f_C with the words in C as its input vocabulary. 3",
"Given a vocabulary C ⊆ V, evaluating v(C) requires training a model f on the dataset D using only the words in C as the vocabulary 4, and measuring its performance on the validation set T to obtain v(C) = q(f_C).",
"We compute the Shapley value φ_i or Banzhaf index β_i of any word w_i ∈ V (see Section 2.1).",
"Words with high values are ones that have a larger positive influence on performance, whereas words with lower values are ones that do not impact task performance when they are removed. 5",
"Observe that the Banzhaf index β_i is the expected marginal contribution m(a_i, C) for a coalition C sampled uniformly at random from the set {C ⊆ V | a_i ∉ C}, and the Shapley value φ_i is the expected marginal contribution m(a_i, π) for a permutation π sampled uniformly at random from Π.",
"We can approximate these by taking a sample of coalitions or permutations, and examining a_i's average marginal contribution in the sample.",
"For the (footnote 2: An equivalent formula for the Shapley value is φ_i = Σ_{C ⊆ A∖{a_i}} [ |C|! (|A| − |C| − 1)! / |A|! ] · [ v(C ∪ {a_i}) − v(C) ], showing the different weights the two indices give to coalitions of different sizes.)",
"(Footnote 3:) For example, for text classification we may define v(C) to be the model's accuracy when using C as the vocabulary.",
"(Footnote 4:) For example, in a text classification task one could train a neural network classifier f_C on the dataset D, replacing all the words in V ∖ C with the UNK token.",
"(Footnote 5:) The direct formulas for the Shapley and Banzhaf indices enumerate over all possible word subsets or permutations, which is intractable.",
"Hence, we use an approximation algorithm (Matsui and Matsui, 2000; Bachrach et al., 2010).",
"Shapley value, the sample consists of permutations of words in the vocabulary, where for each permutation we train two models on vocabularies that differ by a single word w.",
"The performance difference between the two models is then the marginal contribution of the word w.",
"For the Banzhaf index, we directly construct the vocabulary by flipping a fair coin per word to determine its inclusion in the vocabulary.",
"The power index is approximated as the average marginal contribution of the word across the samples.",
"Finally, we select the k words with the highest power index as our vocabulary V′.",
"This is shown in Algorithms 1 and 2.",
"Algorithm 2: Shapley Vocabulary Selection. 1: Inputs: NLP dataset D with full vocabulary V. 2: for each word w in V do 3: φ_w ← 0 (initialise the Shapley value estimate) 4: for i = 1 to S (number of sampled permutations) do 5: π ← Random-Permutation(V) 6: C_1 ← b(w, π) (predecessors of w) 7: C_2 ← C_1 ∪ {w} (predecessors including w) 8: f_{C_1} ← TrainModel(C_1) (train on vocabulary C_1) 9: f_{C_2} ← TrainModel(C_2) (train on vocabulary C_2) 10: m(w, π) ← q(f_{C_2}) − q(f_{C_1}) 11: φ_w ← φ_w + m(w, π) 12: end for 13: φ_w ← (1/S) · φ_w (average the marginal contributions) 14: end for",
"We evaluate our algorithm on multiple tasks, contrasting its performance with common baselines.",
"Single Sentence Classification: this task requires a model to encode the words of a given sentence and output a classification based on properties of the sentence (e.g., sentiment or acceptability).",
"We evaluate on a sentiment-analysis task using the SST-2 dataset (Socher et al., 2013) and a corpus acceptability task using the CoLA dataset (Warstadt et al., 2019; Wang et al., 2018).",
"The sentiment analysis task contains 9.6k sentences labelled with a positive or negative sentiment, while the acceptability task contains 8.5k sentences labelled with an acceptability judgement about whether or not each is a grammatically correct English sentence.",
"Entailment and Question Pair Classification: this task requires a model to encode two sentences and output a classification based on the relation between them.",
"We evaluate on a textual entailment task using the SNLI dataset (Bowman et al., 2015a) and a question pair classification task using the QQP dataset (Wang et al., 2018).",
"SNLI contains 550k sentence pairs and requires models to encode two different sentences, a premise and a hypothesis, and predict one of three relations between them: entailment, contradiction or a neutral relation.",
"The QQP task contains 364k pairs and requires models to encode two different text inputs, a question and an alternate question composed of different words, and to predict whether or not the two questions correspond to the same answer.",
"Document Classification: this task requires models to encode an input document or article, and predict a class based on properties of the document.",
"We 
evaluate on the AG-News and Yelp datasets (Zhang et al., 2015).", "The AG-News dataset contains the title and description of 120,000 news articles in four categories (the prediction target is the category).", "The Yelp dataset contains 130,000 million samples with text reviews, with the prediction target being the polarity of the review (positive or negative).", "The number of words in each text instance (document) are significantly larger than in the single sentence classification task, requiring models to capture phenomena like co-reference and temporal order that are prevalent in longer texts.", "Our method in Section 2.2 is agnostic to the specific model and training procedure: we simply assume we have access to an algorithm that trains on a", "quality q ( f ) is evaluated on a validation set T .", "We perform our empirical evaluation using both an LSTM classifier and a logistic regression classifier.", "Our method trains many models with different vocabularies to select the final vocabulary V (cid:48) .", "We then evaluate the quality of the chosen reduced vocabulary V (cid:48) by training a final model f V (cid:48) which uses only the vocabulary V (cid:48) and evaluate the performance of f V (cid:48) on a held out test T (cid:48) .", "To maximize performance, one should use the same architecture during the vocabulary selection process as the evaluation.", "However, words that are strong features for one architecture are likely to also be strong features for another architecture.", "Hence, we can select the the vocabulary using one architecture even if we intend to use this vocabulary for another architecture.", "As our vocabulary selection procedure trains many models, we use logistic regression models during the vocabulary selection process.", "We show it still significantly outperforms baselines, and allows faster and more efficient computation of the Shapley value.", "We then evaluate the quality of the vocabulary using an LSTM model.", "Training logistic regression models: To train the logistic regression classifier in the single-text case, we represent each sentence or document as the set of words that occur in that text sample.", "For the pairwise-sentence case, we similarly represent each paired input with three times the number of word features, using a one hot encoding indicating that the word occurred only in the first sentence (e.g. question), only in the second sentence (e.g. 
answer) or whether it occurred in both sentences.", "This model is far simpler than state-of-the-art text classification models, but we find it is a good-enough proxy for the Shapley computation step, and much more economical computationally.", "To train the LSTM classifier, we encode words using an embedding layer of size 100.", "These embeddings are fed one at a time to an LSTM encoder with a hidden layer size of 100, and the output of the LSTM encoder is fed into a feedforward neural network yielding the final classification (Deng and Liu, 2018) over some number of classes.", "Our experiments show that even when using the simple logistic regression for the vocabulary selection process we achieve a significant performance improvement over baselines, as evaluated with an LSTM model.", "In other words, the vocabulary quality improvement transfers to more complex models.", "We contrast the performance of our approach (Al-gorithm 1 based on the Banzhaf index and Algorithm 2 based on the Shapley value) with several baselines.", "We first consider ranking by term frequency (TF), i.e selecting the most frequently oc-curing words in the dataset.", "We also consider ranking words by TF-IDF scores (Ramos et al., 2003), which is commonly used for web search.", "As a stronger baseline we consider ranking words based on their regression coefficients, a method used for estimating feature importance (Ellis, 2010; Nimon and Oswald, 2013).", "In this baseline, we train a logistic regression model with L 1 regularization on the dataset D (the regularization encourages the model to have low weights, setting the weight of many features to zero when the regularization is strong enough); we then rank features by the abso-lute coefficient of each feature in the trained model.", "We refer to this as the L 1 baseline .", "6 Our approach for calculating the Banzhaf index or Shapley value is based on a random sample of coalitions, and achieving a good accuracy requires taking many samples, especially when ranking a vocabulary with many words.", "To keep the required compute manageable while achieving a reasonable approximation, we first apply a pre-filtering step, selecting a large vocabulary (but not the full vocabulary) by applying the TF heuristic, then selecting the final small vocabulary from this large vocabulary using our approach.", "For instance, with a target vocabulary size of 100 words, we first filter out all but the 1,000 most frequent words and then rank based on the Shapley value (and contrast the performance of this method with selecting the top 100 words based solely on TF or TF-IDF score).", "When comparing against the L 1 baseline, we similarly apply an L 1 based pre-filtering.", "6 In logistic regression with L 1 regularization, the regression coefficients and derived word ranking depend on the degree of regularization and the initialization.", "Methods like GLMpath (Friedman et al., 2010) obtain the entire L 1 path of the GLM at the cost of fitting a single model.", "In the spirit of stability selection (Meinshausen and Bhlmann, 2010), to alleviate stochasticity we average 20 training runs of the L 1 regularized model, averaging coefficients to obtain the ranking over words (still cheaper computationally than our approach).", "Figure 3 contrasts our method with the TF and TF-IDF baselines in the SST-2 dataset.", "It shows that for all methods, increasing the allowed vocabulary size improves the model quality (at the cost of an increased number of parameters or memory).", "The figure indicates 
that both the Banzhaf and Shapley algorithms offer a significantly better tradeoff between vocabulary size and model quality they produce a better performing model at all the tested vocabulary sizes (the performance gap is especially pronounced for smaller vocabulary sizes).", "Interestingly, the performance of both the Banzhaf and Shapley is very similar.", "Although they both select words with high marginal contributions, they rely on different power indices.", "To determine whether they select the same words, we examined the words selected at a target vocabulary size of | V (cid:48) | = 100 .", "Figure 2 shows the top words according to the different methods.", "The top 100 Figure 4: Performance of a Shapley, TF and TF-IDF in additional task structures.", "words under the Banzhaf and Shapley algorithms intersected on less than 70% of the words, so although they have similar performance, there are non-negligible differences in the words they select.", "Figure 3 relates to single sentence classification.", "Figure 4 shows similar results for the two other types of tasks: pairwise sentence classification and document classification.", "Similarly to the previous figure, these results indicate that our approach achieves a significantly better tradeoff between vocabulary size and model accuracy.", "This indicates that our proposed approach offers advantages across a wide set of NLP tasks.", "Table 1 shows the performance of an LSTM classifier across all tasks and datasets for the various methods.", "It shows a consistent improvement over the baselines in all the tasks for both the Banzhaf and Shapley methods (which have very similar performance in all the datasets).", "Comparison with the L 1 baseline: Section 3.2 considered the stronger baseline of ranking by regression coefficients in an L 1 regularized logistic regression.", "The high-level motivation of this baseline is similar to our approach in that words are ranked based on their influence as measured by training a model; however, the L 1 method trains a single model (or has a computational cost similar to training one or few models), whereas a power index computation relies on training a sample of models.", "Figure 5 shows our approach outperforms the L 1 baseline.", "Comparison with subword approaches: Subword embeddings (Sennrich et al., 2015) is a recent approach which considers tokens that can be parts of words, resulting in a less sparse vocabulary and having features shared across words.", "Such approaches are flexible and allow choosing a target vocabulary size.", "Our approach can also work with subword embeddings: after computing some set of subwords over the vocabulary, we can still filter out less important subwords to improve task Task & Dataset Method Vocab Acc SST-2 TF-IDF 17,539 80.2 (Socher et al., 2013) Frequency 80.3 Banzhaf 81.7 Shapley 81.9 COLA TF-IDF 9007 63.5 (Warstadt et al., 2019) Frequency 63.7 Banzhaf 63.9 Shapley 64.2 SNLI TF-IDF 42,392 83.9 (Bowman et al., 2015b) Frequency 83.9 Banzhaf 84.1 Shapley 84.3 QQP TF-IDF 117,303 80.8 (Wang et al., 2018) Frequency 81.2 Banzhaf 81.9 Shapley 81.9 AG-NEWS TF-IDF 159,697 79.6 (Zhang et al., 2015) Frequency 78.5 Banzhaf 79.9 Shapley 80.2 YELP TF-IDF 458,705 84.5 (Zhang et al., 2015) Frequency 83.9 Banzhaf 86.7 Shapley 87 Table 1: Performance of vocabulary selection methods across datasets and tasks, at a target vocabulary size of | V (cid:48) | = 750 words (column 3 is initial vocabulary size).", "performance.", "We evaluated whether applying our approach on top of 
using subword embeddings can still lead to improved performance.", "We first run a byte-pair encoding (BPE) algorithm (Sen-nrich et al., 2015; Provilkov et al., 2019; Kudo and Richardson, 2018) over each input vocabulary for a dataset.", "This algorithm operates by merging together the most frequent sequence of adjacent tokens in each iteration.", "We do this for a total number of 10,000 merges, resulting in a smaller vocabulary that now composed of subwords.", "We then apply Shapley, Banzhaf, TF and TF-IDF rankings of these subword tokens, as we have done in the word-level experiments.", "Figure 6 shows that we have improved performance over the baselines in the subword case as well.", "The results in Section 4 show that a game theoretic approach to vocabulary selection can achieve better tradeoffs between the vocabulary size and model performance than heuristics such as TF and TF-IDF based selection, or a method based on regression coefficients in an L 1 regularized logistic regression.", "This advantage comes at the cost of having a significantly higher computational cost of selecting the vocabulary.", "Following the expensive selection step, we now have the benefit of a smaller model which is more interpretable and explainable, has a reduced memory consumption and potentially less prone to overfitting.", "We have proposed several ways to mitigate the compute load of selecting the vocabulary: applying a heuristic pre-filtering step and using logisitic regression models rather than the full model while estimating power indices.", "We proposed a vocabulary selection method for NLP tasks, using cooperative game theory.", "We discuss related work on model compression, tailoring the vocabulary in NLP tasks and using subword embeddings, and approximating game theoretic solutions and using them for explainable AI.", "Model compression: Using the full vocabulary to train models limits the applicability of models in memory-constrained or computation-constrained scenarios (Faruqui et al., 2015; Yo-gatama et al., 2015).", "Earlier work discusses methods for compressing model size.", "These yield models that are less expensive in memory and compute, and that are also more easily interpretable.", "Model compression methods include matrix compression methods such as sparsification of weights in a matrix (Wen et al., 2016), Bayesian inference for compression (Molchanov et al., 2017; Neklyu-dov et al., 2017), feature selection methods such as ANOVA (Girden, 1992), precision reduction methods (Han et al., 2015; Hubara et al., 2017) and approximations of the weight matrix (Tjandra et al., 2017; Le et al., 2015).", "Our method relies on game theoretic principles; it filters our vocabulary words, and can thus operate with any NLP architecture (i.e. 
the method is agnostic to the model architecture used).", "Further, the interpretability in our case stems from having few features, clearly highlighting the most impactful features in the dataset.", "Vocabulary selection methods and subword and character level embeddings: earlier work examined selecting a vocabulary for an NLP task.", "Some alternatives drop out words (Chen et al., 2019), whereas character-level methods that attempt to represent the input text at the level of individual characters (Kim et al., 2015; Lee et al., 2017; Ling et al., 2015) while subword methods attempt to tokenize words into parts of words in a more efficient way (Sennrich et al., 2015; Kudo and Richardson, 2018).", "Character level embedding methods decompose words to allow each individual character to have its own embedding.", "This reduces the vocabulary size to the number of characters, much smaller than the number of words in the full vocabulary.", "However, this is not applicable for some character-free languages (e.g. Chinese, Japanese, Korean).", "Also, such methods have reduced performance, and typically use larger embedding sizes than word embedding models to obtain reasonable quality (Zhang et al., 2015; Kim et al., 2015).", "In contrast, subword embeddings have shown improved performance for several NLP tasks.", "Such methods typically merge pairs of frequent character sequences, to get a more optimal token vocabulary from an information-theoretic viewpoint.", "Byte-pair encoding (BPE) algorithms construct subword vocabulary that is less sparse, and increases shared features between words 7 , allowing better propoga-tion of semantic meaning.", "As shown in Section 4, our method can operate on top of subword embeddings, and achieve good tradeoffs between the model size and performance.", "Cooperative game theory and applications for explainable AI: we use concepts from game theory, viewing words as players in a game whose goal is to improve model performance.", "Such settings have been a key topic of study in game theory since the 1950s (Weintraub, 1992).", "Many solution concepts have been proposed, examining issues such as stability and fairness.", "Power indices such as the Banzhaf index (Banzhaf III, 1964) and Shapley value (Shapley, 1953) to measure the relative impact players have on the outcome of the game.", "It is computationally hard to calculate them even in simple games (Matsui and Matsui, 2001; Elkind et al., 2007).", "We have applied a Monte-Carlo sampling approximation based on existing methods (Fatima et al., 2008; Bachrach et al., 2010).", "Our use of the Shapley value is akin to recent explainable AI methods, that attempt to allow AI models to provide human readable insights to explain their decisions (Adadi and Berrada, 2018; Samek et al., 2019).", "For example, power indices (such as the Shapley value) have been used to explain individual model predictions (Datta et al., 2016; Lundberg and Lee, 2017), by estimating the contribution of individual features on each prediction.", "This can be done for linear models (Lundberg and Lee, 2017) as well as tree-based models (Lundberg et al., 2020).", "Explainable AI methods typically take a trained model and a given instance as input, and perturb the features of the instance, using the same model to output predictions for many perturbed inputs.", "In contrast, our goal is not to understand the predictions of a given model, but to select an small input vocabulary set for a task, focusing on the most relevant part of the input space and 
yielding simpler and more interpretable models.", "Further, we train many models to estimate contributions, rather than perturbing the inputs for a single model.", "7 For instance, the word sadder\" could be split into sad\" and er\" , where the ending er has a similar meaning in other circumstances faster\" , nearer\" etc. 7 Conclusion We proposed a vocabulary selection method based on cooperative game theory and empirically showed improvements over baselines in multiple NLP tasks.", "Our approach, with its task-specific vocabulary, offers an improved model size and quality tradeoffs.", "Several questions remain open for future research on better vocabulary selection.", "Could alternative power indices, apart from what we have shown using the Shapley and Banzhaf indeces, achieve better performance?", "Is there a way to better combine our methods with subword embeddings?", "Moreover, given that our method is computationally demanding during vocabulary construction time, an interesting problem is to explore ways to speed up this process; both theoretically, through a different power index calculation, and practically, through better parallelization.", "We would like to acknowledge Thore Graepel for formative advice and discussions at early stages of the project.", "We would also like to thank Richard Everett, Thomas Anthony, Andrea Tacchetti and Tom Eccles for helpful questions and comments throughout the project, and Ellie Pavlick for suggestions on robustly evaluating NLI models.", "We would also like to acknowledge the anonymous reviewers and area chairs, whose feedback and suggested changes for additional experiments were hugely helpful in making the paper stronger, more robust, and clearer than before." ]
[ "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "method", "objective", "objective", "objective", "method", "method", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "other", "other", "other", "other", "method", "method", "other", "other", "other", "objective", "objective", "objective", "abstain", "other", "abstain", "method", "objective", "other", "other", "other" ]
[ "We propose a straightforward vocabulary adaptation scheme to extend the language capacity of multilingual machine translation models, paving the way towards efficient continual learning for multilingual machine translation.", "Our approach is suitable for large-scale datasets, applies to distant languages with unseen scripts, incurs only minor degradation on the translation performance for the original language pairs and provides competitive performance even in the case where we only possess monolingual data for the new languages.", "The longstanding goal of multilingual machine translation (Firat et al., 2016; Johnson et al., 2017; Aharoni et al., 2019; Gu et al., 2018) has been to develop a universal translation model, capable of providing high-quality translations between any pair of languages.", "Due to limitations on the data available, however, current approaches rely on first selecting a set of languages for which we have data and training an initial translation model on this data jointly for all languages in a multi-task setup.", "In an ideal setting, one would continually update the model once data for new language pairs arrives.", "This setting, dubbed in the literature as continual learning (Ring, 1994; Rebuffi et al., 2017; Kirkpatrick et al., 2017; Lopez-Paz and Ranzato, 2017), introduces new challenges not found in the traditional multi-task setup, most famously catastrophic forgetting (McCloskey and Cohen, 1989), in which the model may lose its previously-learned knowledge as it learns new language pairs.", "This situation is further complicated by the training procedures of standard tokenizers, such as Byte-Pair Encoding (BPE) (Sennrich et al., 2015b) or Sentencepiece (Kudo and Richardson, 2018), which necessitate access to monolingual data for all the languages considered before producing the vocabulary.", "Failing to comply with these requirements, one risks suboptimal segmentation rules which in the worst case could result in strings of entirely <UNK> tokens for text in a previously-unseen alphabet.", "In this work, we investigate how vocabularies derived from BPE transform if they are rebuilt with the same settings but with additional data from a new language.", "We show in Section 3.1 that there is a large token overlap between the original and updated vocabularies.", "This large overlap allows us to retain the performance of a translation model after replacing its vocabulary with the updated vocabulary that additionally supports a new language.", "Past works have explored adapting translation models to new languages, typically focusing on related languages which share similar scripts (Gu et al., 2018; Neubig and Hu, 2018; Lakew et al., 2019; Chronopoulou et al., 2020).", "These works usually focus solely on learning the new language pair, with no consideration for catastrophic forgetting.", "Moreover, these works only examine the setting where the new language pair comes with parallel data, despite the reality that for a variety of low-resource languages, we may only possess high-quality monolingual data with no access to parallel data.", "Finally, unlike our approach, these approaches do not recover the vocabulary one would have built if one had access to the data for the new language from the very beginning.", "Having alleviated the vocabulary issues, we study whether we are able to learn the new language pair rapidly and accurately, matching the performance of a model which had access to this data at the beginning of training.", "We propose a simple adaptation 
scheme that allows our translation model to attain competitive performance with strong bilingual and multilingual baselines in a small number of additional gradient steps.", "Moreover, our model retains most of the translation quality on the original language pairs it was trained on, exhibiting no signs of catastrophic forgetting.", "Related works Adapting translation models to new languages has been studied in the past.", "Neubig and Hu (2018) showed that a large multilingual translation model trained on a subset of languages of the TED dataset (Qi et al., 2018) could perform translation on the remaining (related) languages.", "Tang et al. (2020) were able to extend the multilingual translation model mBART (Liu et al., 2020) from 25 to 50 languages by exploiting the fact that mBART's vocabulary already supported those additional 25 languages.", "Escolano et al. (2021) were able to add new languages to machine translation models by training language-specific encoders and decoders.", "Other works (Zoph et al., 2016; Lakew et al., 2018, 2019; Escolano et al., 2019) have studied repurposing translation models as initializations for bilingual models for a target low-resource language pair.", "Most recently, Chronopoulou et al. (2020) examined reusing language models for high-resource languages as initializations for unsupervised translation models for a related low-resource language through the following recipe: build a vocabulary V_X and a language model for the high-resource language X; once data for the low-resource language Y arrives, build a joint vocabulary V_{X,Y} and let V_{Y|X} be the tokens from Y that appear in V_{X,Y}; substitute the vocabulary of the language model with V_X ∪ V_{Y|X} and use the language model as the initialization for the translation model.", "Our approach In this work, we are not only interested in the performance of our multilingual translation models on new language pairs, we also require that our models retain the performance on the multiple language pairs that they were initially trained on.", "We will also be interested in how the performance of these models compares with those obtained in the oracle setup where we have all the data available from the start.", "The approaches discussed above generate vocabularies that are likely different (both in selection and number of tokens) from the vocabulary one would obtain if one had a priori access to the missing data, due to the special attention given to the new language.", "This architectural divergence will only grow as we continually add new languages, which inhibits the comparisons to the oracle setup.", "We eliminate this mismatch by first building a vocabulary V_N on the N languages available; then, once the new language arrives, we build a new vocabulary V_{N+1} as we would have if we had possessed the data from the beginning and replace V_N with V_{N+1}.", "We then reuse the embeddings for tokens in the intersection and continue training.",
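A minimal sketch of this embedding carry-over when the vocabulary is swapped from V_N to V_{N+1}, assuming dict-based vocabularies (token string to row index) and a NumPy embedding matrix; the function name and the random initialization for unseen tokens are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def substitute_vocabulary(old_vocab, new_vocab, old_embeddings, init_std=0.02):
    """Build an embedding matrix for the updated vocabulary V_{N+1},
    reusing the learned rows for every token shared with V_N.

    old_vocab / new_vocab: dict mapping token string -> row index.
    old_embeddings: array of shape (len(old_vocab), d_model).
    """
    d_model = old_embeddings.shape[1]
    rng = np.random.default_rng(0)
    # New tokens start from a small random vector (an assumption; any
    # standard initializer could be used here instead).
    new_embeddings = rng.normal(0.0, init_std, size=(len(new_vocab), d_model))
    shared = 0
    for token, new_idx in new_vocab.items():
        old_idx = old_vocab.get(token)
        if old_idx is not None:
            new_embeddings[new_idx] = old_embeddings[old_idx]
            shared += 1
    print(f"reused {shared}/{len(new_vocab)} rows ({shared / len(new_vocab):.1%} overlap)")
    return new_embeddings
```

Because the token overlap between V_N and V_{N+1} is large in the multilingual setting, most rows are simply copied over, which is what makes the substitution nearly lossless.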
"The success of our approach relies on the fact that for large N (i.e. the multilingual setting), V_N and V_{N+1} are mostly equivalent, which allows the model to retain its performance after we substitute vocabularies.", "We verify this in the following section.", "In this section, we outline the set of experiments we conducted in this work.", "We first discuss the languages and data sources we use for our experiments.", "We then provide the training details for how we trained our initial translation models.", "Next, we compute the token overlap between various vocabularies derived from BPE before and after we include data for a new language and empirically verify that this overlap is large if the vocabulary already supports a large number of languages.", "We then examine the amount of knowledge retained after vocabulary substitution by measuring the degradation of the translation performance on the original language pairs from replacing the original vocabulary with an updated one.", "Finally, we examine the speed and quality of the adaptation to new languages under various settings.", "Languages considered Our initial model will have access to data coming from 24 languages.", "Our monolingual data comes primarily from the newscrawl datasets and Wikipedia, while the parallel data comes from WMT training sets and Paracrawl.", "We will adapt our model to the following four languages: Kazakh, which is not related linguistically to any of the original 24 languages, but does share scripts with Russian and Bulgarian; Bengali, which is related to the other Indo-Aryan languages but possesses a distinct script; Polish, which is related to (and shares scripts with) many of the Slavic languages in our original set; and Pashto, which is not closely related to any of the languages in our original set and has a distinct script.", "Tokens shared between the two vocabularies are also forced to share the same indices.", "rewritten but we still reuse the outdated embeddings.", "In alphabetical order: Bulgarian, Czech, Danish, German, Greek, English, Spanish, Estonian, Finnish, French, Gujarati, Hindi, Croatian, Hungarian, Italian, Lithuanian, Latvian, Portuguese, Romanian, Russian, Slovak, Slovenian, Tamil.", "http://data.statmt.org/news-crawl/", "Figure 1: The degradation in BLEU from substituting vocabularies at inference.", "We provide an in-depth account of the data available for each language in the appendix.", "Model configurations We perform our experiments in JAX (Bradbury et al., 2018), using the neural network library FLAX.", "We use Transformers (Vaswani et al., 2017) as the basis of our translation models.", "We use the Transformer Big configuration and a shared BPE model of 64k tokens with byte-level fallback using the Sentencepiece library.", "We used a maximum sequence length of 100 and discarded all sequences longer than that during training.", "Sampling scheme We train our models leveraging both monolingual and parallel datasets, following previous work (Siddhant et al., 2020; Garcia et al., 2020).", "We sample examples from monolingual and parallel sources with equal probability.", "Within each source, we use a temperature-based sampling scheme based on the numbers of samples of the relevant datasets with a temperature of 5 (Arivazhagan et al., 2019).", "Training objectives We apply the MASS objective (Song et al., 2019) on the monolingual data and cross-entropy on the parallel data.", "We used the Adam (Kingma and Ba, 2015) optimizer, with an initial learning rate of 4e-4, coupled with a linear warmup followed by a linear decay to 0.",
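A small sketch of the temperature-based sampling described above, assuming only the example counts of each dataset are known; the helper name and the toy dataset names are illustrative, while the exponent 1/T with T = 5 follows the cited scheme:

```python
def temperature_sampling_probs(dataset_sizes, temperature=5.0):
    """Map dataset sizes to sampling probabilities proportional to
    (n_i / sum_j n_j) ** (1 / T).  T = 1 recovers proportional sampling,
    larger T flattens the distribution towards uniform."""
    total = float(sum(dataset_sizes.values()))
    scaled = {name: (n / total) ** (1.0 / temperature)
              for name, n in dataset_sizes.items()}
    norm = sum(scaled.values())
    return {name: s / norm for name, s in scaled.items()}

# Example: a high-resource and a low-resource parallel corpus.
print(temperature_sampling_probs({"en-fr": 40_000_000, "en-gu": 150_000}))
```

Higher temperatures let the low-resource datasets be seen far more often than their raw share of the data would suggest.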
"The initial warmup took 1k steps, and the total training time was 500k steps.", "We also included weight decay with a hyperparameter of 0.2.", "Evaluation We use beam search with a beam size of 4 and a length penalty of 0.6 for decoding.", "We evaluate the quality of our models using BLEU scores (Papineni et al., 2002).", "We exclusively use detokenized BLEU, computed through sacreBLEU (Post, 2018) for consistency with previous work and future reproducibility.", "3.1 Transfer learning from vocabulary substitution Measuring token overlap We now examine the impact on the vocabulary derived from a BPE model upon the inclusion of a new language.", "We first build corpora consisting of text from 1, 5, 10, 15, 20, and 24 of our original languages.", "For each corpus, we make copies and add additional data for either Bengali, Polish, Kazakh, Pashto, or their union, yielding a total of 30 corpora.", "We build BPE models using the same settings for each corpus, compute the token overlap between the vocabularies with and without the additional language, and report the results in Table 1.", "sacreBLEU signature: BLEU + case.mixed + numrefs.1 + smooth.exp + tok.13a + version.1.4.14", "We used 1 million lines of raw text per language.",
"Model | PMIndia (bn / en) | newsdev2020 (pl / en) | newstest2019 (kk / en) | FLoRes devset (ps / en)
Original Vocabulary: Unadapted | 0.0 / 0.0 | 2.4 / 4.0 | 0.7 / 2.2 | 0.0 / 0.0
Original Vocabulary: xx monolingual & parallel | 5.7 / 13.6 | 20.2 / 26.2 | 3.9 / 17.2 | 2.8 / 10.3
Original Vocabulary: 4xx monolingual & parallel | 5.3 / 15.1 | 18.3 / 25.0 | 2.7 / 15.8 | 2.3 / 8.4
Adapted Vocabulary: xx monolingual | 0.0 / 1.7 | 13.9 / 24.3 | 0.9 / 19.0 | 0.0 / 6.5
Adapted Vocabulary: xx monolingual (+BT) | - / - | 21.3 / 24.1 | 4.7 / 19.5 | - / -
Adapted Vocabulary: xx monolingual & parallel | 10.0 / 27.2 | 21.5 / 27.5 | 5.9 / 20.2 | 6.6 / 15.1
Adapted Vocabulary: 4xx monolingual & parallel | 10.5 / 26.4 | 20.3 / 26.8 | 5.6 / 20.5 | 6.7 / 15.2
Oracle: xx monolingual & parallel | 10.1 / 29.2 | 19.6 / 26.8 | 5.4 / 20.5 | 6.6 / 14.7
Oracle: 4xx monolingual & parallel | 10.0 / 28.6 | 18.9 / 26.4 | 5.4 / 20.3 | 6.2 / 14.4
Table 2: BLEU scores on the new languages. The monolingual models have access to exclusively monolingual data for the new language(s), while monolingual & parallel models add parallel data as well.",
"In the multilingual setting, we attain large token overlap, more than 90%, even for languages with distinct scripts or when we add multiple languages at once.", "We extend this analysis to different vocabulary sizes and examine which tokens are lost in Appendix A.3.", "Measuring the deterioration from swapping vocabularies at inference To measure the amount of knowledge transferred through the vocabulary substitution, we compute the translation performance of our initial translation model with the adapted vocabularies without any additional updates.", "For each new language, we compute the change in BLEU from the model with its original vocabulary and the one utilizing the adapted one and plot the results in Figure 1.", "
Notably, we only incur minor degradation in performance from the vocabulary substitution.", "We now study the effect of introducing a new language into our translation model.", "We require an adaptation recipe which enjoys the following properties: fast, in terms of number of additional gradient steps; performant, in terms of BLEU scores on the new language pair; retentive, in terms of minimal regression in the translation performance of the model on the original language pairs.", "Our solution: re-compute the probabilities for the temperature-based sampling scheme using the new data, upscale the probabilities of sampling the new datasets by a factor, then rescale the remaining probabilities so that their combined sum is one.", "We limit ourselves to either 15k or 30k additional steps (3% and 6% respectively of the training time for the original model) depending on the data available, to ensure fast adaptation.", "We reset the Adam optimizer's stored accumulators, reset the learning rate to 5e-5 and keep it fixed.", "We provide more details in Appendix A.2.", "Aside from these modifications, we continue training with the same objectives as before unless noted otherwise.", "We include the results for oracle models trained in the same way as the original model but with access to both the adapted vocabulary and the missing data.", "We use 15k steps if we leverage both monolingual and parallel data for a single language pair.", "We use 30k steps if we only use monolingual data or if we are adapting to all four languages at once.", "We compute the BLEU scores and report them in Table 2.", "Our models adapted with parallel data are competitive with the oracle models, even when we add all four languages at once and despite the restrictions we imposed on our adaptation scheme.", "For languages that share scripts with the original ones (Kazakh and Polish), we can also attain strong performance leveraging monolingual data alone, albeit we need to introduce back-translation (Sennrich et al., 2015a) for optimal performance.", "We can also adapt the translation model using the original vocabulary, but the quality lags behind the models using the adapted vocabularies.", "This gap is larger for Bengali and Pashto, where the model is forced to rely on byte-level fallback, further reaffirming the value of using the adapted vocabularies.", "To examine whether catastrophic forgetting has occurred, we proceed as in Section 3.1 and examine the performance on the original language pairs after adaptation on the new data against the oracle model which had access to this data in the beginning of training.", "We present the results for the models adapted to Kazakh in Figure 2.", "All the models' performance on the original language pairs deviates only slightly from the oracle model, mitigating some of the degradation from the vocabulary substitution (i.e., compare the kk and bn+pl+kk+ps curves in Figure 1 to the curves in Figure 2).", "Lastly, we compare our models with external baselines for Kazakh.", "We consider the multilingual model mBART (Liu et al., 2020) as well as all the WMT submissions that reported results on English-Kazakh.", "Of these baselines, only mBART and Kocmi et al. (2018) use sacreBLEU, which inhibits proper comparison with the rest of the models.", "We include them for completeness.", "We report the scores in Table 3.", "
Our adapted models are able to outperform mBART in both directions, as well as some of the weaker WMT submissions, despite those models specifically optimizing for that language pair and task.", "We present an approach for adding new languages to multilingual translation models.", "Our approach allows for rapid adaptation to new languages with distinct scripts with only a minor degradation in performance on the original language pairs." ]
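One possible implementation of the adaptation-time sampling adjustment described above (recompute the temperature-based probabilities over all datasets, upscale the newly added ones, and renormalize the rest); the upscale factor and all names here are assumptions for illustration, since the text does not fix the factor:

```python
def adaptation_sampling_probs(base_probs, new_datasets, upscale=5.0):
    """base_probs: temperature-based sampling probabilities recomputed over all
    datasets, including the newly added ones (they sum to 1).
    Multiply each new dataset's probability by `upscale`, then rescale the
    remaining probabilities so the distribution sums to one again."""
    boosted = {n: p * upscale for n, p in base_probs.items() if n in new_datasets}
    new_mass = sum(boosted.values())
    assert new_mass < 1.0, "upscale factor too large for this dataset mix"
    old_total = sum(p for n, p in base_probs.items() if n not in new_datasets)
    scale = (1.0 - new_mass) / old_total
    return {n: boosted.get(n, p * scale) for n, p in base_probs.items()}

# Example: boosting a newly added language pair.
probs = {"en-fr": 0.30, "en-de": 0.30, "en-ru": 0.30, "en-kk": 0.10}
print(adaptation_sampling_probs(probs, {"en-kk"}, upscale=3.0))
```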
[ "objective", "objective", "abstain", "objective", "abstain", "other", "abstain", "abstain", "objective", "result", "objective", "abstain", "abstain", "objective", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "result", "abstain", "objective", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "objective", "objective" ]
[ "Recent work has questioned the importance of the Transformer's multi-headed attention for achieving high translation quality.", "We push further in this direction by developing a hard-coded attention variant without any learned parameters.", "Surprisingly, replacing all learned self-attention heads in the encoder and decoder with fixed, input-agnostic Gaussian distributions minimally impacts BLEU scores across four different language pairs.", "However, additionally hard-coding cross attention (which connects the decoder to the encoder) significantly lowers BLEU, suggesting that it is more important than self-attention.", "Much of this BLEU drop can be recovered by adding just a single learned cross attention head to an otherwise hard-coded Transformer.", "Taken as a whole, our results offer insight into which components of the Transformer are actually important, which we hope will guide future work into the development of simpler and more efficient attention-based models.", "The Transformer (Vaswani et al., 2017) has become the architecture of choice for neural machine translation.", "Instead of using recurrence to contextualize source and target token representations, Transformers rely on multi-headed attention mechanisms (MHA), which speed up training by enabling parallelization across timesteps.", "Recent work has called into question how much MHA contributes to translation quality: for example, a significant fraction of attention heads in a pretrained Transformer can be pruned without appreciable loss in BLEU (Voita et al., 2019; Michel et al., 2019), and self-attention can be replaced by less expensive modules such as convolutions (Yang et al., 2018; Wu et al., 2019).", "any learned parameters (Section 3).", "Concretely, we replace each attention head with a hard-coded version, which is simply a standard normal distribution centered around a particular position in the sequence (Figure 1).", "1 When we replace all encoder and decoder self-attention mechanisms with our hard-coded variant, we achieve almost identical BLEU scores to the baseline Transformer for four different language pairs (Section 4).", "2 These experiments maintain fully learned MHA cross attention , which allows the decoder to condition its token representations on the encoder's outputs.", "We next attempt to additionally replace cross attention with a hard-coded version, which results in substantial drops of 5-10 BLEU.", "Motivated to find the minimal number of learned attention 1 In Figure 1, the hard-coded head distribution centered on the word to (shown in green) is [0 . 054 , 0 . 24 , 0 . 40 , 0 . 24 , 0 . 
054] .", "2 Our code is available at https://github.com/ fallcat/stupidNMT L1 L2 L3 L4 L5 0 10 20 30 D i s t a n c e Encoder Self-Attention Head 1 Head 2 L1 L2 L3 L4 L5 Layer Decoder Self-Attention L1 L2 L3 L4 L5 Decoder Cross-Attention Figure 2: Most learned attention heads for a Transformer trained on IWSLT16 En-De focus on a local window around the query position.", "parameters needed to make up this deficit, we explore configurations with only one learned cross attention head in total, which performs just slightly worse (1-3 BLEU) than the baseline.", "By replacing MHA with hard-coded attention, we improve memory efficiency (26.4% more tokens per batch) and decoding speed (30.2% in-crease in sentences decoded per second) without significantly lowering BLEU, although these efficiency improvements are capped by other more computationally-expensive components of the model (Section 5).", "We also perform analysis experiments (Section 6.2) on linguistic properties (e.g., long-distance subject-verb agreement) that MHA is able to better model than hard-coded attention.", "Finally, we develop further variants of hard-coded attention in Section 6.3, including a version without any attention weights at all.", "Our hard-coded Transformer configurations have intuitively severe limitations: attention in a particular layer is highly concentrated on a local window in which fixed weights determine a token's importance.", "Nevertheless, the strong performance of these limited models indicates that the flexibility enabled by fully-learned MHA is not as crucial as commonly believed: perhaps attention is not all you need.", "We hope our work will spur further development of simpler, more efficient models for neural machine translation.", "In this section, we first briefly review the Transformer architecture of Vaswani et al. 
(2017) with a focus on its multi-headed attention.", "Then, we provide an analysis of the learned attention head distributions of a trained Transformer model, which motivates the ideas discussed afterwards.", "The Transformer is an encoder-decoder model formed by stacking layers of attention blocks.", "Each encoder block contains a self-attention layer followed by layer normalization, a residual connection, and a feed-forward layer.", "Decoder blocks are identical to those of the encoder except they also include a cross attention layer, which connects the encoder's representations to the decoder.", "To compute a single head of self-attention given a sequence of token representations t 1 ...n , we first project these representations to queries q 1 ...n , keys k 1 ...n , and values v 1 ...n using three different linear projections.", "Then, to compute the self-attention distribution at a particular position i in the sequence, we take the scaled dot product between the query vector q i and all of the key vectors (represented by matrix K ).", "We then use this distribution to compute a weighted average of the values ( V ): Attn ( q i , K , V ) = softmax ( q i K (cid:62) d k ) V (1) where d k is the dimensionality of the key vector.", "For MHA, we use different projection matrices to obtain the query, key, and value representations for each head.", "The key difference between self-attention and cross attention is that the queries and keys come from different sources: specifically, the keys are computed by passing the encoder's final layer token representations through a linear projection.", "To summarize, MHA is used in three different components of the Transformer: encoder self-attention, decoder self-attention, and cross attention.", "The intuition behind MHA is that each head can focus on a different type of information (e.g., syntactic or semantic patterns).", "While some heads have been shown to possess interpretable patterns (Voita et al., 2019; Correia et al., 2019), other work has cautioned against using attention patterns to explain a model's behavior (Jain and Wallace, 2019).", "In our analysis, we specifically examine the behavior of a head with respect to the current query token's position in the sequence.", "We train a baseline Transformer model (five layers, two heads per layer) on the IWSLT 2016 En De dataset, and compute aggregated statistics on its learned heads.", "Figure 2 shows that outside of a few layers, most of the model's heads focus their attention (i.e., the argmax of the attention distribution) on a local neighborhood around the current sequence position.", "For example, both self-attention heads in the first layer of the encoder tend to focus on just a one to two token window around the current position.", "The decoder self-attention and cross attention heads show higher variability, but most of their heads are still on average focused on local information.", "These results beg the question of whether replacing self-attention with hard-coded patterns that focus on local windows will significantly affect translation quality.", "While learned attention enables model flexibility (e.g., a head can look far away from the current position if it needs to), it is unclear from the above analysis how crucial this flexibility is.", "To examine this question, we replace the attention distribution computation in Equation 1 (i.e., scaled dot product of queries and keys) with a fixed Gaussian distribution.", "3 In doing so, we remove all learned parameters from the attention 
computation: the mean of the Gaussian is determined by the position i of the current query token, and the standard deviation is always set to 1.", "4 As Transformers contain both self-attention and cross attention, the rest of this section details how we replace both of these components with simplified versions.", "We will re-3 Yang et al. (2018) implement a similar idea, except the mean and standard deviation of their Gaussians are learned with separate neural modules.", "4 Preliminary experiments with other standard deviation values did not yield significant differences, so we do not vary the standard deviation for any experiments in this paper.", "fer to experimental results on the relatively small IWSLT16 English-German dataset throughout this section to contextualize the impact of the various design decisions we describe.", "Section 4 contains a more fleshed out experimental section with many more datasets and language pairs.", "In self-attention, the queries and keys are derived from the same token representations and as such have the same length n .", "The baseline Transformer ( BASE ) computes the self-attention distribution at position i by taking the dot product between the query representation q i and all of the key vectors k 1 ...n .", "We instead use a fixed Gaussian distribution centered around position i 1 (token to the left), i (the query token), or i + 1 (token to the right).", "More formally, we replace Equation 1 with Attn ( i, V ) = N ( f ( i ) , 2 ) V .", "The mean of the Gaussian f ( i ) and its standard deviation 2 are both hyperparameters; for all of our experiments, we set to 1 and f ( i ) to either i 1 , i or i + 1 , depending on the head configuration.", "5 Note that this definition is completely agnostic to the input representation: the distributions remain the same regardless of what sentence is fed in or what layer we are computing the attention at.", "Additionally, our formulation removes the query and key projections from the attention computation; the Gaussians are used to compute a weighted average of the value vectors.", "6 Instead of learning different query and key projection matrices to define different heads, we simply design head distributions with different means.", "Figure 1 shows an example of our hard-coded self-attention for a simple sentence.", "We iterate over different configurations of distribution means f ( i ) on the IWSLT16 En-De dataset, while keeping the cross attention learned.", "7 Our best validation result with hard-coded self-attention ( HC-SA ) replaces encoder self-attention with distributions centered around i 1 and i + 1 and decoder self-attention with distributions centered around i 1 and i .", "This 5 The Gaussian distribution is cut off on the borders of the sentence and is not renormalized to sum to one.", "6 Preliminary models that additionally remove the value projections performed slightly worse when we hard-coded cross attention, so we omit them from the paper.", "7 See Appendix for a table describing the effects of varying f ( i ) on IWSLT16 En-De BLEU score.", "We turn next to cross attention, which on its face seems more difficult to replace with hard-coded distributions.", "Unlike self-attention, the queries and keys in cross attention are not derived from the same token representations; rather, the queries come from the decoder while the keys come from the encoder.", "Since the number of queries can now be different from the number of keys, setting the distribution means by position is less trivial than it is for 
self-attention.", "Here, we describe two methods to simplify cross attention, starting with a fully hard-coded approach and moving onto a minimal learned configuration.", "Hard-coded cross attention: We begin with a simple solution to the problem of queries and keys having variable lengths.", "Given a training dataset, we compute the length ratio by dividing the average source sentence length by the average target sentence length.", "Then, to define a hard-coded cross attention distribution for target position i , we center the Gaussian on positions (cid:98) i 1 (cid:99) , (cid:98) i (cid:99) , and (cid:98) i + 1 (cid:99) of the source sentence.", "When we implement this version of hard-coded cross attention and also hard-code the encoder and decoder self-attention as described previously ( HC-ALL ), our BLEU score on IWSLT16 En-De drops from 30.3 to 21.1 .", "Clearly, cross attention is more important for maintaining translation quality than self-attention.", "Michel et al. (2019) notice a similar phenomenon when pruning heads from a pretrained Transformer: removing certain cross attention heads can substantially lower BLEU.", "Learning a single cross attention head: Prior to the advent of the Transformer, many neural machine translation architectures relied on just a single cross attention head (Bahdanau et al., 2015).", "The Transformer has many heads at many layers, but how many of these are actually necessary?", "Here, we depart from the parameter-free approach by instead removing cross attention at all but the final layer of the decoder, where we include only a single learned head ( SH-X ).", "Note that this is the only learned head in the entire model, as both the encoder and decoder self-attention is hard-coded.", "On IWSLT16 En-De, our BLEU score improves from 21.1 to 28.1 , less than 2 BLEU under the BASE Transformer.", "The previous section developed hard-coded configurations and presented results on the relatively small IWSLT16 En-De dataset.", "Here, we expand our experiments to include a variety of different datasets, language pairs, and model sizes.", "For all hard-coded head configurations, we use the optimal IWSLT16 En-De setting detailed in Section 3.1 and perform no additional tuning on the other datasets.", "This configuration nevertheless proves robust, as we observe similar trends with our hard-coded Transformers across all of datasets.", "8 4.1 Datasets We experiment with four language pairs, English { German, Romanian, French, Japanese } to show the consistency of our proposed attention variants.", "For the En-De pair, we use both the small IWSLT 2016 9 and the larger WMT 2014 datasets.", "For all datasets except WMT14 En De and WMT14 En Fr, 10 we run experiments in both directions.", "For English-Japanese, we train and evaluate on IWSLT 2017 En Ja TED talk dataset.", "More dataset statistics are shown in Table 1.", "Our BASE model is the original Transformer from Vaswani et al. (2017), reimplemented in PyTorch (Paszke et al., 2019) by Akoury et al. 
(2019).", "11 To implement hard-coded attention, we only modify the attention functions in this code-base and keep everything else the same.", "For the two small IWSLT datasets, we follow prior work 8 Code and scripts to reproduce our experimental results to be released after blind review.", "9 We report BLEU on the IWSLT16 En-De dev set following previous work (Gu et al., 2018; Lee et al., 2018; Akoury et al., 2019).", "For other datasets, we report test BLEU.", "10 As the full WMT14 En Fr is too large for us to feasibly train on, we instead follow Akoury et al. (2019) and train on just the Europarl / Common Crawl subset, while evaluating using the full dev/test sets.", "11 https://github.com/dojoteef/synst BASE HC-SA HC-ALL SH-X IWSLT16 En-De 30.0 30.3 21.1 28.2 IWSLT16 De-En 34.4 34.8 25.7 33.3 IWSLT17 En-Ja 20.9 20.7 10.6 18.5 IWSLT17 Ja-En 11.6 10.9 6.1 10.1 WMT16 En-Ro 33.0 32.9 25.5 30.4 WMT16 Ro-En 33.1 32.8 26.2 31.7 WMT14 En-De 26.8 26.3 21.7 23.5 WMT14 En-Fr 40.3 39.1 35.6 37.1 Table 2: Comparison of the discussed Transformer variants on six smaller datasets (top) 14 and two larger datasets (bottom).", "by using a small Transformer architecture with embedding size 288, hidden size 507, four heads, 12 five layers, and a learning rate 3e-4 with a linear scheduler.", "For the larger datasets, we use the standard Tranformer base model, with embedding size 512, hidden size 2048, eight heads, six layers, and a warmup scheduler with 4,000 warmup steps.", "For all experiments, we report BLEU scores using SacreBLEU (Post, 2018) to be able to compare with other work.", "13 4.3 Summary of results Broadly, the trends we observed on IWSLT16 EnDe in the previous section are consistent for all of the datasets and language pairs.", "Our findings are summarized as follows: A Transformer with hard-coded self-attention in the encoder and decoder and learned cross attention ( HC-SA ) achieves almost equal BLEU scores to the BASE Transformer.", "Hard-coding both cross attention and self-attention ( HC-ALL ) considerably drops BLEU compared to BASE , suggesting cross attention is more important for translation quality.", "A configuration with hard-coded self-12 For hard-coded configurations, we duplicate heads to fit this architecture (e.g., we have two heads per layer in the encoder with means of i + 1 and i 1 ).", "13 SacreBLEU signature: BLEU+case.mixed+lang.LANG +numrefs.1+smooth.exp+test.TEST+tok.intl+version.1.2.11,withLANG { en-de, de-en, en-fr } and TEST { wmt14/full, iwslt2017/tst2013 } .", "For WMT16 EnRo and IWSLT17 En-Ja, we follow previous work for preprocessing (Sennrich et al., 2016), encoding the latter with a 32K sentencepiece vocabulary ( https://github.com/google/sentencepiece ) and measuring the de-tokenized BLEU with SacreBLEU.", "attention and a single learned cross attention head in the final decoder layer ( SH-X ) consistently performs 1-3 BLEU worse than BASE .", "These results motivate a number of interesting analysis experiments (e.g., what kinds of phenomena is MHA better at handling than hard-coded attention), which we describe in Section 6.", "The strong performance of our highly-simplified models also suggests that we may be able to obtain memory or decoding speed improvements, which we investigate in the next section.", "We have thus far motivated our work as an exploration of which components of the Transformer are necessary to obtain high translation quality.", "Our results demonstrate that encoder and decoder self-attention can be replaced with hard-coded attention 
distributions without loss in BLEU, and that MHA brings minor improvements over single-headed cross attention.", "In this section, we measure efficiency improvements in terms of batch size increases and decoding speedup.", "Experimental setup: We run experiments on WMT16 En-Ro with the larger architecture to support our conclusions.", "15 For each model variant discussed below, we present its memory efficiency as the maximum number of tokens per batch allowed during training on a single GeForce RTX 2080 Ti.", "Additionally, we provide inference speed as the number of sentences per second each model can decode on a 2080 Ti, reporting the average of five runs with a batch size of 256.", "Hard-coding self-attention yields small efficiency gains: Table 7 summarizes our profiling experiments.", "Hard-coding self-attention and preserving learned cross attention allows us to fit 17% more tokens into a single batch, while also providing a 6% decoding speedup compared to BASE on the larger architecture used for WMT16 En-Ro.", "The improvements in both speed and memory usage are admittedly limited, which motivates us to measure the maximum efficiency gain if we only modify self-attention (i.e., preserving learned cross attention).", "We run a set of upper bound experiments where we entirely remove self-attention in the encoder and decoder.", "The resulting encoder 15 Experiments with the smaller IWSLT16 En-De model are described in the Appendix.", "thus just becomes a stack of feed-forward layers on top of the initial subword embeddings.", "Somewhat surprisingly, the resulting model still achieves a fairly decent BLEU of 27.0 compared to the BASE model's 33.0 .", "As for the efficiency gains, we can fit 27% more tokens into a single batch, and decoding speed improves by 12.3% over BASE .", "This relatively low upper bound for HC-SA shows that simply hard-coding self-attention does not guarantee significant speedup.", "Previous work that simpli-fies attention (Wu et al., 2019; Michel et al., 2019) also report efficiency improvements of similar low magnitudes.", "Single-headed cross attention speeds up decoding: Despite removing learned self-attention from both the encoder and decoder, we did not observe huge efficiency or speed gains.", "However, reducing the source attention to just a single head results in more significant improvements.", "By only keeping single-headed cross attention in the last layer, we are able to achieve 30.2% speed up and fit in 26.4% more tokens to the memory compared to BASE .", "Compared to HC-SA , SH-X obtains a 22.9% speedup and 8.0% bigger batch size.", "From our profiling experiments, most of the speed and memory considerations of the Transformer are associated with the large feed-forward layers that we do not modify in any of our experiments, which caps the efficiency gains from modifying the attention implementation.", "While we did not show huge efficiency improvements on modern GPUs, it remains possible that (1) a more tailored implementation could leverage the model simpli-fications we have made, and (2) that these differences are larger on other hardware (e.g., CPUs).", "We leave these questions for future work.", "Taken as a whole, our experimental results suggest that many of the components in the Transformer can be replaced by highly-simplified versions without adversely affecting translation quality.", "In this section, we explain how hard-coded self-attention does not degrade translation quality (Section 6.1), perform a detailed analysis of the behavior of 
our various models by comparing the types of errors made by learned versus hard-coded attention (Sec-tion 6.2), and also examine different attention configurations that naturally follow from our experiments (Section 6.3).", "Given the good performance of HC-SA on multiple datasets, it is natural to ask why hard-coding self-attention does not deteriorate translation quality.", "We conjecture that feed-forward (FF) layers play a more important role in HC-SA than in BASE by compensating for the loss of learned dynamic self-attention.", "To test this hypothesis, we conduct an analysis experiment in which we train four model configurations while varying the number of layers: BASE , BASE without feed-forward layers ( BASE /-FF ), HC-SA and HC-SA without feed-forward layers ( HC-SA /FF ).", "As shown in Figure 3, BASE and HCSA have similar performance and both -FF models have consistently lower BLEU scores.", "However, HC-SA without FF layers performs much worse <10 10-20 20-30 30-40 >40 reference length 16 18 20 22 24 26 28 BLEUWMT En-De BASE HC-SA HC-ALL SH-X Figure 4: BLEU difference vs. BASE as a function of reference length on the WMT14 En-De test set.", "compared to its BASE counterpart.", "This result confirms our hypothesis that FF layers are more important in HC-SA and capable of recovering the potential performance degradation brought by hard-coded self-attention.", "Taking a step back to hard-coding cross attention, the failure of hard-coding cross attention might be because the feed-forward layers of the decoder are not powerful enough to compensate for modeling both hard-coded decoder self-attention and cross attention.", "Is learned attention more important for longer sentences?", "Since hard-coded attention is much less flexible than learned attention and can struggle to encode global information, we are curious to see if its performance declines as a function of sentence length.", "To measure this, we categorize the WMT14 En-De test set into five bins by reference length and plot the decrease in BLEU between BASE and our hard-coded configurations for each bin.", "Somewhat surprisingly, Figure 4 shows that the BLEU gap between BASE and HC-SA seems to be roughly constant across all bins.", "16 However, the fully hard-coded HC-ALL model clearly deteriorates as reference length increases.", "Does hard-coding attention produce any systematic linguistic errors?", "For a more fine-grained analysis, we run experiments on LingEval97 (Sennrich, 2017), an English German dataset consisting of contrastive 16 We note that gradients will flow across long distances if the number of layers is large enough, since the effective window size increases with multiple layers (van den Oord et al., 2016; Kalchbrenner et al., 2016).", "translation pairs.", "This dataset measures targeted errors on thirteen different linguistic phenomena such as agreement and adequacy.", "BASE and HC-SA perform 17 very similarly across all error types (Table 4), which is perhaps unsurprising given that their BLEU scores are almost identical.", "Interestingly, the category with the highest decrease from BASE for both HC-SA and HC-ALL is deleted negations ; 18 HC-ALL is 11% less accurate (absolute) at detecting these substitutions than BASE (94% vs 83%).", "On the other hand, both HC-SA and HC-ALL are actually better than BASE at detecting inserted negations , with HC-ALL achieving a robust 98.7% accuracy.", "We leave further exploration of this phenomenon to future work.", "Finally, we observe that for the subject-verb 
agreement category, the discrepancy between BASE and the hard-coded models increases as the distance between subject-verb increases (Figure 5).", "This result confirms that self-attention is important for modeling some long-distance phenomena, and that cross attention may be even more crucial.", "self-attention focuses on non-local information?", "Since hard-coded models concentrate most of the attention probability mass on local tokens, they might underperform on sentences for which the 17 Accuracy is computed by counting how many references have lower token-level cross entropy loss than their contrastive counterparts.", "learned heads of the BASE model focus on tokens far from the current query position.", "We define a token to be off-diagonal when the maximum probability of that token's attention is at least two steps away from query position.", "A sen-tence's off-diagonality is then the proportion of off-diagonal tokens within the sentence.", "We bin the sentences in IWSLT En-De development set by their off-diagonality and analyze the translation quality of our models on these different bins.", "Figure 6 shows that for decoder self attention, the BLEU gap between HC-ALL and BASE increases as off-diagonality increases, while the gap between BASE and SH-X remains relatively constant across all bins.", "HC-SA even outperforms BASE for sentences with fewer off-diagonal tokens.", "Is it important for the Gaussian to span the entire sequence?", "One natural question about the hard-coded attention strategy described in Sec-Original Conv (window=3) Indexing En-De 30.3 30.1 29.8 En-Ro 32.4 32.3 31.4 Table 5: Comparison of three implementations of HCSA .", "tion 3 is whether it is necessary to assign some probability to all tokens in the sequence.", "After all, the probabilities outside a local window become very marginal, so perhaps it is unnecessary to preserve them.", "We take inspiration from Wu et al. (2019), who demonstrate that lightweight convolutions can replace self-attention in the Transformer without harming BLEU, by recasting our hard-coded attention as a convolution with a hard-coded 1-D kernel.", "While this decision limits the Gaussian distribution to span over just tokens within a fixed window around the query token, it does not appreciably impact BLEU (second column of Table 5).", "We set the window size to 3 in all experiments, so the kernel weights become [0 . 242 , 0 . 399 , 0 . 
242] .", "Are any attention weights necessary at all?", "The previous setting with constrained window size suggests another follow-up: is it necessary to have any attention weights within this local window at all?", "A highly-efficient alternative is to have each head simply select a single value vector associated with a token in the window.", "Here, our implementation requires no explicit multiplication with a weight vector, as we can compute each head's representation by simply indexing into the value vectors.", "Mathematically, this is equivalent to convolving with a binary kernel (e.g., convolution with [1 , 0 , 0] is equivalent to indexing the left token rep-resentation).", "The third column of Table 5 shows that this indexing approach results in less than 1 BLEU drop across two datasets, which offers an interesting avenue for future efficiency improvements.", "heads?", "Our experiments with cross attention so far have been limited to learning just a single head, as we have mainly been interested in minimal configurations.", "If we have a larger budget of cross attention heads, where should we put them?", "Is it better to have more cross attention heads in the last layer in the decoder (and no heads anywhere else), or to distribute them across multiple layers 1 2 3 4 Number of learned heads 27.5 28.5 29.5 30.5 BLEU WMT2016 En-Ro Multiple heads same layer Single head across layers Figure 7: Adding more cross attention heads in the same layer helps less than adding individual heads across different layers.", "of the decoder?", "Experiments on the WMT16 En-Ro dataset 19 (Figure 7) indicate that distributing learned heads over multiple layers leads to significantly better BLEU than adding all of them to the same layer.", "Attention mechanisms were first introduced to augment vanilla recurrent models (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015; Chorowski et al., 2015; Wu et al., 2016; Miceli Barone et al., 2017) but have become the featured component of the state-of-the-art Transformer architecture (Vaswani et al., 2017) for NMT.", "We review recent research that focuses on analysing and improving multiheaded attention, and draw connections to our work.", "The intuitive advantage of MHA is that different heads can focus on different types of information, all of which will eventually be helpful for translation.", "Voita et al. (2019) find that some heads focus on adjacent tokens to the query (mirroring our analysis in Section 2), while others focus on specific dependency relations or rare tokens.", "Correia et al. (2019) discover that some heads are sensitive to subword clusters or interrogative words.", "Tang et al. (2018) shows that the number of MHA heads affects the ability to model long-range dependencies.", "Michel et al. (2019) show that pruning many heads from a pretrained model does not significantly impact BLEU scores.", "Similarly, Voita et al. (2019) prune many encoder self-attention heads without degrading BLEU, while Tang et al. 
(2019) further 19 We used the smaller IWSLT En-De architecture for this experiment.", "simplify the Transformer by removing the entire encoder for a drop of three BLEU points.", "In contrast to existing literature on model pruning, we train our models without learned attention heads instead of removing them post-hoc.", "There have been many efforts to modify MHA in Transformers.", "One such direction is to inject linguistic knowledge through auxiliary supervised tasks (Garg et al., 2019; Pham et al., 2019).", "Other work focuses on improving inference speed: Yang et al. (2018) replace decoder self-attention with a simple average attention network, assigning equal weights to target-side previous tokens.", "20 Wu et al. (2019) also speed up decoding by replacing self-attention with convolutions that have time-step dependent kernels; we further simplify this work with our fixed convolutional kernels in Section 6.", "Cui et al. (2019) also explore fixed attention while retaining some learned parameters, and Vashishth et al. (2019) show that using uniform or random attention deteriorates performances on paired sentences tasks including machine translation.", "Other work has also explored modeling locality (Shaw et al., 2018; Yang et al., 2018).", "In this paper, we present hard-coded Gaussian attention, which while lacking any learned parameters can rival multi-headed attention for neural machine translation.", "Our experiments suggest that encoder and decoder self-attention is not crucial for translation quality compared to cross attention.", "We further find that a model with hard-coded self-attention and just a single cross attention head performs slightly worse than a baseline Transformer.", "Our work provides a foundation for future work into simpler and more computationally efficient neural machine translation.", "We thank the anonymous reviewers for their thoughtful comments, Omer Levy for general guidance and for suggesting some of our efficiency experiments, the UMass NLP group for helpful comments on earlier drafts, Nader Akoury for assisting with modifications to his Transformer codebase, and Kalpesh Krishna for advice on the structure of the paper.", "20 In preliminary experiments, we find that using uniform distributions for encoder self-attention decreases BLEU.", "This result is similar to the indexing implementation we describe in Section 6.3." ]
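A minimal NumPy sketch of the hard-coded attention head described in this paper: a fixed Gaussian with standard deviation 1 centered on position f(i), cut off at the sentence borders, not renormalized, and used only to average the value vectors. This is an illustration of the idea under those stated assumptions, not the authors' implementation:

```python
import numpy as np

def hard_coded_attention(values, offset=0, sigma=1.0):
    """values: (n, d) value vectors for one head; offset selects the mean
    f(i) = i + offset (e.g. -1, 0, +1).  Returns (n, d) head outputs."""
    n = values.shape[0]
    positions = np.arange(n)
    means = positions + offset
    # weights[i, j] = Gaussian density N(j; mean=i+offset, sigma); positions
    # outside the sentence simply do not exist, so the tails are cut off.
    weights = np.exp(-0.5 * ((positions[None, :] - means[:, None]) / sigma) ** 2)
    weights /= sigma * np.sqrt(2 * np.pi)
    # No renormalization over j, mirroring the paper's description.
    return weights @ values

# Example: a 5-token "sentence" with 4-dimensional value vectors.
V = np.random.randn(5, 4)
out_left = hard_coded_attention(V, offset=-1)   # head centered on the previous token
out_right = hard_coded_attention(V, offset=+1)  # head centered on the next token
```

Truncating these weights to a three-token window yields the fixed convolution kernel [0.242, 0.399, 0.242] discussed in Section 6.3, and replacing the kernel with a one-hot vector gives the indexing variant.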
[ "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "method", "result", "abstain", "result", "abstain", "other", "objective", "result", "method", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "objective", "method", "method", "abstain", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "other", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "abstain", "other", "other", "other", "method", "other", "objective", "other", "other", "other", "method", "other", "other", "method", "method", "result", "method", "other", "other", "other" ]
[ "Recurrent neural networks (RNNs) have achieved impressive results in a variety of linguistic processing tasks, suggesting that they can induce non-trivial properties of language.", "We investigate here to what extent RNNs learn to track abstract hierarchical syntactic structure.", "We test whether RNNs trained with a generic language modeling objective in four languages (Italian, English, Hebrew, Russian) can predict long-distance number agreement in various constructions.", "We include in our evaluation nonsensical sentences where RNNs cannot rely on semantic or lexical cues (The colorless green ideasideasideasideasideasideasideasideasideasideasideasideasideasideasideasideasideas I ate with the chair sleepsleepsleepsleepsleepsleepsleepsleepsleepsleepsleepsleepsleepsleepsleepsleepsleep furiously), and, for Italian, we compare model performance to human intuitions.", "Our language-model-trained RNNs make reliable predictions about long-distance agreement, and do not lag much behind human performance.", "We thus bring support to the hypothesis that RNNs are not just shallow-pattern extractors, but they also acquire deeper grammatical competence.", "Recurrent neural networks (RNNs; Elman, 1990) are general sequence processing devices that do not explicitly encode the hierarchical structure that is thought to be essential to natural language (Everaert et al., 2015).", "Early work using artificial languages showed that they may nevertheless be able to approximate context-free languages (Elman, 1991).", "More recently, RNNs have The work was conducted during the internship at Facebook AI Research, Paris.", "achieved impressive results in large-scale tasks such as language modeling for speech recognition and machine translation, and are by now standard tools for sequential natural language tasks (e.g., Mikolov et al., 2010; Graves, 2012; Wu et al., 2016).", "This suggests that RNNs may learn to track grammatical structure even when trained on noisier natural data.", "The conjecture is supported by the success of RNNs as feature extractors for syntactic parsing (e.g., Cross and Huang, 2016; Kiperwasser and Goldberg, 2016; Zhang et al., 2017).", "Linzen et al. (2016) directly evaluated the extent to which RNNs can approximate hierarchical structure in corpus-extracted natural language data.", "They tested whether RNNs can learn to predict English subject-verb agreement, a task thought to require hierarchical structure in the general case (the girlgirlgirlgirlgirlgirlgirlgirlgirlgirlgirlgirlgirlgirlgirlgirlgirl the boys like. . . 
is or are?).", "Their experiments confirmed that RNNs can, in principle, handle such constructions.", "However, in their study RNNs could only succeed when provided with explicit supervision on the target task.", "Linzen and colleagues argued that the unsupervised language modeling objective is not sufficient for RNNs to induce the syntactic knowledge necessary to cope with long-distance agreement.", "The current paper reevaluates these conclusions.", "We strengthen the evaluation paradigm of Linzen and colleagues in several ways.", "Most importantly, their analysis did not rule out the possibility that RNNs might be relying on semantic or collocational/frequency-based information, rather than purely on syntactic structure.", "In dogs in the neighbourhood often bark, an RNN might get the right agreement by encoding information about what typically barks (dogs, not neighbourhoods), without relying on more abstract structural cues.", "In a follow-up study to Linzen and colleagues', Bernardy and Lappin (2017) observed that RNNs are better at long-distance agreement when they construct rich lexical representations of words, which suggests effects of this sort might indeed be at play.", "We introduce a method to probe the syntactic abilities of RNNs that abstracts away from potential lexical, semantic and frequency-based confounds.", "Inspired by Chomsky's (1957) insight that grammaticalness cannot be identified with meaningfulness (p. 106), we test long-distance agreement both in standard corpus-extracted examples and in comparable nonce sentences that are grammatical but completely meaningless, e.g., (paraphrasing Chomsky): The colorless green ideas I ate with the chair sleep furiously.", "We extend the previous work in three additional ways.", "First, alongside English, which has few morphological cues to agreement, we examine Italian, Hebrew and Russian, which have richer morphological systems.", "Second, we go beyond subject-verb agreement and develop an automated method to harvest a variety of long-distance number agreement constructions from treebanks.", "Finally, for Italian, we collect human judgments for the tested sentences, providing an important comparison point for RNN performance.", "We focus on the more interesting unsupervised setup, where RNNs are trained to perform generic, large-scale language modeling (LM): they are not given explicit evidence, at training time, that they must focus on long-distance agreement, but they are rather required to track a multitude of cues that might help with word prediction in general.", "Our results are encouraging.", "RNNs trained with an LM objective solve the long-distance agreement problem well, even on nonce sentences.", "The pattern is consistent across languages, and, crucially, not far from human performance in Italian.", "Moreover, RNN performance on language modeling (measured in terms of perplexity) is a good predictor of long-distance agreement accuracy.", "This suggests that the ability to capture structural generalizations is an important aspect of what makes the best RNN architectures so good at language modeling.", "The code to reproduce our experiments and the data used for 
training and evaluation, including the human judgments in Italian, can be found at https://github.com/facebookresearch/colorlessgreenRNNs.", "Since our positive results contradict, to some extent, those of Linzen et al. (2016), we also replicate their relevant experiment using our best RNN (an LSTM).", "We outperform their models, suggesting that a careful architecture/hyperparameter search is crucial to obtain RNNs that are not only good at language modeling, but able to extract syntactic generalizations.", "Overview.", "We construct our number agreement test sets as follows.", "Original sentences are automatically extracted from a dependency treebank.", "They are then converted into nonce sentences by substituting all content words with random words with the same morphology, resulting in grammatical but nonsensical sequences.", "An LM is evaluated on its predictions for the target (second) word in the dependency, in both the original and nonce sentences.", "Long-distance agreement constructions.", "Agreement relations, such as subject-verb agreement in English, are an ideal test bed for the syntactic abilities of LMs, because the form of the second item (the target) is predictable from the first item (the cue).", "Crucially, the cue and the target are linked by a structural relation, where linear order in the word sequence does not matter (Everaert et al., 2015).", "Consider the following subject-verb agreement examples: the girl thinks. . . , the girl [you met] thinks. . . , the girl [you met yesterday] thinks. . . , the girl [you met yesterday through her friends] thinks. . . .", "In all these cases, the number of the main verb thinks is determined by its subject (girl), and this relation depends on the syntactic structure of the sentence, not on the linear sequence of words.", "As the last sentence shows, the word directly preceding the verb can even be a noun with the opposite number (friends), but this does not influence the structurally-determined form of the verb.",
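A sketch of the nonce-generation step just outlined, where every content word is replaced by a random treebank word carrying the same POS and morphological features while function words and punctuation are kept; the token and lexicon representations are simplifying assumptions, not the authors' code:

```python
import random

CONTENT_POS = {"NOUN", "VERB", "ADJ", "PROPN", "NUM", "ADV"}

def make_nonce(sentence, lexicon, rng=None):
    """sentence: list of (form, pos, morph) tuples, e.g. ("girls", "NOUN", "Number=Plur").
    lexicon: dict mapping (pos, morph) -> list of candidate forms seen in the treebank."""
    rng = rng or random.Random(0)
    nonce = []
    for form, pos, morph in sentence:
        if pos in CONTENT_POS:
            # Substitute content words only; fall back to the original form
            # if no other word with the same POS and morphology is available.
            candidates = lexicon.get((pos, morph), [form])
            form = rng.choice(candidates)
        nonce.append((form, pos, morph))
    return nonce
```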
), an LM can predict the target without access to syntactic structure: it can simply extract the relevant morphosyntactic features of words (e.g., number) and record the co-occurrence frequencies of patterns such as N Plur V Plur (Mikolov et al., 2013).", "Thus, we focus here on long-distance agreement, where an arbitrary number of words can occur between the elements of the agreement relation.", "(Figure 1: Example agreement constructions defined by a dependency and the separating context, in (a) English, (b) Russian and (c) Italian.)", "We limit ourselves to number agreement (plural or singular), as it is the only overt agreement feature shared by all of the languages we study.", "Identifying candidate constructions.", "We started by collecting pairs of part-of-speech (POS) tags connected by a dependency arc.", "Independently of which element is the head of the relation, we refer to the first item as the cue and to the second as the target .", "We additionally refer to the POS sequence characterizing the entire pattern as a construction , and to the elements in the middle as context .", "For each candidate construction, we collected all of the contexts in the corpus that intervene between the cue and the target (we define contexts as the sequence of POS tags of the top-level nodes in the dependency subtrees).", "For example, for the English subject-verb agreement construction shown in Fig. 1a, the context is defined by VERB (head of the relative clause) and ADV (adverbial modifier of the target verb), which together dominate the sequence the boys like often.", "For the Russian adjective-noun agreement construction in Fig. 1b, the context is NOUN , because in the dependency grammar we use the noun moment is the head of the prepositional phrase at that moment, which modifies the adjective deep.", "The candidate agreement pair and the context form a construction, which is characterized by a sequence of POS tags, e.g., NOUN VERB ADV VERB or VERB NOUN CCONJ VERB (Fig. 1c).", "Our constructions do not necessarily correspond to standard syntactic structures.", "The English subject-verb agreement construction NOUN VERB VERB , for example, matches both object and subject relative clause contexts, e.g., girl the boys like is and girls who stayed at home were.", "Conversely, standard syntactic structures might be split between different constructions, e.g., relative clause contexts occur in both NOUN VERB VERB and NOUN VERB ADV VERB constructions (the latter is illustrated by the English example in Fig.
1a).", "Construction contexts can contain a variable number of words.", "Since we are interested in challenging cases, we only considered cases in which at least three tokens intervened between the cue and the target.", "Excluding non-agreement constructions.", "In the next step, we excluded constructions in which the candidate cue and target did not agree in number in all of the instances of the construction in the treebank (if both the cue and the target were morphologically annotated for number).", "This step retained English subject-verb constructions, for example, but excluded verb-object constructions, since any form of a verb can appear both with singular and plural objects.", "To focus on robust agreement patterns, we only kept constructions with at least 10 instances of both plural and singular agreement.", "When applied to the treebanks we used (see Section 3), this step resulted in between two (English) and 21 (Russian) constructions per language.", "English has the poorest morphology and consequently the lowest number of patterns with identifiable morphological agreement.", "Only the VP-conjunction construction (Fig. 1c) was identified in all four languages.", "Subject-verb agreement constructions were extracted in all languages but Russian; Russian has relatively flexible word order and a noun dependent preceding a head verb is not necessarily its subject.", "The full list of extracted constructions in English and Italian is given in Tables 2 and 3, respectively.", "For the other languages, see the Supplementary Material (SM).", "Original sentence test set.", "Our original sentence test set included all sentences from each construction where all words from the cue and up to and including the target occurred in the LM vocabulary (Section 3), and where the singular/plural counterpart of the target occurred in the treebank and in the language model vocabulary (this is required by the evaluation procedure outlined below).", "The total counts of constructions and original sentences in our test sets are provided in Table 1. The average number of context words separating the cue and the target ranged from 3.6 (Hebrew) to 4.5 (Italian).", "Generating nonce sentences.", "We generated nine nonce variants of each original sentence as follows.", "Each content word (noun, verb, adjective, proper noun, numeral, adverb) in the sentence was substituted by another random content word from the treebank with matching POS and morphological features.", "To avoid forms that are ambiguous between several POS, which are particularly frequent in English (e.g., plural noun and singular verb forms), we excluded the forms that appeared with a different POS more than 10% of the time in the treebank.", "Function words (determiners, pronouns, adpositions, particles) and punctuation were left intact.", "For example, we generated the nonce (1b) from the original sentence (1a):", "(1) a. It presents the case for marriage equality and states . . .", "b. It stays the shuttle for honesty insurance and finds . . .",
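A minimal sketch of the substitution step just described, assuming the treebank has been loaded as (form, POS, morphological-features) token tuples; the data layout and helper names are illustrative assumptions rather than the authors' code.

```python
import random
from collections import defaultdict

CONTENT_POS = {"NOUN", "VERB", "ADJ", "PROPN", "NUM", "ADV"}

def build_lexicon(treebank_tokens):
    """Index treebank word forms by their (POS, morphological features) pair."""
    lexicon = defaultdict(set)
    for form, pos, feats in treebank_tokens:
        lexicon[(pos, feats)].add(form)
    return {key: sorted(forms) for key, forms in lexicon.items()}

def nonce_variant(sentence, lexicon, rng=random):
    """Replace every content word with a random form sharing its POS and features.

    `sentence` is a list of (form, pos, feats) tuples; function words and
    punctuation are left intact, mirroring the procedure in the text.
    """
    out = []
    for form, pos, feats in sentence:
        candidates = lexicon.get((pos, feats))
        if pos in CONTENT_POS and candidates:
            out.append(rng.choice(candidates))
        else:
            out.append(form)
    return out
```

Running such a substitution nine times per original sentence, after additionally filtering POS-ambiguous forms as described above, would yield nonce variants analogous to (1b).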
"Note that our generation procedure is based on morphological features and does not guarantee that argument structure constraints are respected (e.g., it stays the shuttle in (1b)).", "(Footnote: The SM is available as a standalone file on the project's public repository.)", "Evaluation procedure.", "For each sentence in our test set, we retrieved from our treebank the form that is identical to the agreement target in all morphological features except number (e.g., finds instead of find in (1b)).", "Given a sentence with prefix p up to and excluding the target, we then compute the probabilities P(t1 | p) and P(t2 | p) for the singular and plural variants of the target, t1 and t2, based on the language model.", "Following Linzen et al. (2016), we say that the model identified the correct target if it assigned a higher probability to the form with the correct number.", "In (1b), for example, the model should assign a higher probability to finds than find (see the sketch below).", "3 Experimental setup.", "Treebanks.", "We extracted our test sets from the Italian, English, Hebrew and Russian Universal Dependency treebanks (UD, v2.0, Nivre et al., 2016).", "The English and Hebrew treebanks were post-processed to obtain a richer morphological annotation at the word level (see SM for details).", "LM training data.", "Training data for Italian, English and Russian were extracted from the respective Wikipedias.", "We downloaded recent dumps, extracted the raw text from them using WikiExtractor (https://github.com/attardi/wikiextractor) and tokenized it with TreeTagger (Schmid, 1995).", "We also used the TreeTagger lemma annotation to filter out sentences with more than 5% unknown words.", "For Hebrew, we used the preprocessed Wikipedia corpus made available by Yoav Goldberg (http://u.cs.biu.ac.il/yogo/hebwiki/).", "We extracted 90M token subsets for each language, shuffled them by sentence and split them into training and validation sets (8-to-1 proportion).", "For LM training, we included the 50K most frequent words in each corpus in the vocabulary, replacing the other tokens with the UNK symbol.", "The validation set perplexity values we report below exclude unknown tokens.", "RNN language models.", "We experimented with simple RNNs (sRNNs, Elman, 1990), and their most successful variant, long short-term memory models (LSTMs, Hochreiter and Schmidhuber, 1997).", "(Footnote: Obviously, in the nonce cases, the LMs never assigned the highest overall probability to either of the two candidates; qualitatively, in such cases LMs assigned the largest absolute probabilities to plausible frequent words.)"
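The sketch referenced above, illustrating the Linzen et al. (2016) evaluation criterion: the model is credited with the item if it assigns a higher probability to the correctly numbered target form than to its number-swapped counterpart. It assumes a PyTorch-style word-level language model whose forward pass returns next-word logits for every position; that interface and the vocabulary object are assumptions for illustration.

```python
import torch

def log_prob_of_form(lm, vocab, prefix_words, target_form):
    """Log P(target_form | prefix) under a word-level language model.

    Assumes lm(input_ids) returns logits of shape (batch, seq_len, vocab_size)
    giving the next-word distribution at each position.
    """
    ids = torch.tensor([vocab[w] for w in prefix_words]).unsqueeze(0)
    with torch.no_grad():
        logits = lm(ids)
    log_probs = torch.log_softmax(logits[0, -1], dim=-1)
    return log_probs[vocab[target_form]].item()

def correct_number_chosen(lm, vocab, prefix_words, correct_form, wrong_form):
    """True if the correctly numbered form (e.g. 'finds') outscores the wrong one ('find')."""
    return (log_prob_of_form(lm, vocab, prefix_words, correct_form)
            > log_prob_of_form(lm, vocab, prefix_words, wrong_form))
```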
"We use the PyTorch RNN implementation (https://github.com/pytorch/examples/tree/master/word_language_model).", "We trained the models with two hidden layer dimensionalities (650 and 200 units), and a range of batch sizes, learning rates and dropout rates.", "See SM for details on hyperparameter tuning.", "In general, a larger hidden layer size was the best predictor of lower perplexity.", "Given that our LSTMs outperformed our sRNNs, our discussion of the results will focus on the former; we will use the terms LSTM and RNN interchangeably (detailed results for sRNNs can be found in the SM).", "Baselines.", "We consider three baselines: first, a unigram baseline, which picks the most frequent form in the training corpus out of the two candidate target forms (singular or plural); second, a 5-gram model with Kneser-Ney smoothing (KN, Kneser and Ney, 1995) trained using the IRSTLM package (Federico et al., 2008) and queried using KenLM (Heafield, 2011); and third, a 5-gram LSTM, which only had access to windows of five tokens (Chelba et al., 2017).", "Compared to KN, the 5-gram LSTM can generalize to unseen n-grams thanks to its embedding layer and recurrent connections.", "However, it cannot discover long-distance dependency patterns that span more than five words.", "See SM for details on the hyperparameters of this baseline.", "Human experiment in Italian.", "We presented the full Italian test set (119 original and 1071 nonce sentences) to human subjects through the Amazon Mechanical Turk interface.", "We picked Italian because, being morphologically richer, it features more varied long-distance constructions than English.", "Subjects were requested to be native Italian speakers.", "They were presented with a sentence up to and excluding the target.", "The singular and plural forms of the target were presented below the sentence (in random order), and subjects were asked to select the more plausible form.", "To prevent long-distance agreement patterns from being too salient, we mixed the test set with the same number of filler sentences.", "We started from original fillers, which were random treebank-extracted sentences up to a content word in singular or plural form.", "We then generated nonce fillers from the original ones using the procedure outlined in Section 2. A control subset of 688 fillers was manually selected by a linguistically-trained Italian native speaker as unambiguous cases.", "(Table 1: Experimental results for all languages averaged across the five best models in terms of perplexity on the validation set. Original/Nonce rows report percentage accuracy, and the numbers in small print represent standard deviation within the five best models.)", "To make sure we were only using data from native (or at least highly proficient) Italian speakers, we filtered out the responses of subjects who chose the wrong target in more than 20% of the fillers.", "We collected on average 9.5 judgments for each item (minimum 5 judgments).", "To account for the variable number of judgments across sentences, accuracy rates were first calculated within each sentence and then averaged across sentences.", "The overall results are reported in Table 1."
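A small sketch of the two-stage aggregation just described for the human judgments: accuracy is first computed within each sentence and those per-sentence rates are then averaged, so items that happened to receive more judgments do not dominate the overall score. The input layout is an assumption for illustration.

```python
def sentence_level_accuracy(judgments_by_sentence):
    """Average per-sentence accuracy.

    `judgments_by_sentence` maps a sentence id to a list of booleans
    (True = the subject chose the correctly numbered target form).
    """
    per_sentence = [
        sum(votes) / len(votes)
        for votes in judgments_by_sentence.values()
        if votes
    ]
    return sum(per_sentence) / len(per_sentence)
```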
"We report results averaged across the five models with the lowest validation perplexity, as well as standard deviations across these models.", "(Table 2: LSTM accuracy in the constructions N V V (subject-verb agreement with an intervening embedded clause) and V NP conj V (agreement between conjoined verbs separated by a complement of the first verb).)", "In summary, the LSTM clearly outperformed the other LMs.", "Rather surprisingly, its performance on nonce sentences was only moderately lower than on original ones; in Italian this gap was only 6.6%.", "The KN LM performed poorly; its accuracy on nonce sentences was comparable to that of the unigram baseline.", "This confirms that the number of the target in nonce sentences cannot be captured by shallow n-gram patterns.", "The 5-gram LSTM model greatly improved over the KN baseline; its accuracy dropped only modestly between the original and nonce sentences, demonstrating its syntactic generalization ability.", "Still, the results are substantially below those of the LSTM with unlimited history.", "This confirms that our test set contains hard long-distance agreement dependencies, and, more importantly, that the more general LSTM model can exploit broader contexts to learn about and track long-distance syntactic relations.", "The increase in accuracy scores across the three LMs (KN, 5-gram LSTM and unbounded-context LSTM) correlates well with their validation perplexities in the language modeling task.", "We also found a strong correlation between agreement accuracy and validation perplexity across all the LSTM variants we explored in the hyperparameter search (68 models per language), with Pearson correlation coefficients ranging from r = 0.55 in Hebrew to r = 0.78 in English (p < 0.
001 in all languages).", "This suggests that acquiring abstract syntactic competence is a natural component of the skills that improve the generic language modeling performance of RNNs.", "Differences across languages.", "English was by far the hardest language.", "We conjecture that this is due to its poorer morphology and higher POS ambiguity, which might not encourage a generic language model to track abstract syntactic configurations.", "There is an alternative hypothesis, however.", "We only extracted two constructions for English, both of which can be argued to be linguistically complex: subject-verb agreement with an intervening embedded clause, and agreement between two conjoined verbs with a nominal complement intervening between the verbs.", "Yet the results on these two constructions, comparable across languages (with the exception of the subject-verb construction in Russian, which was not extracted), confirm that English is particularly hard (Table 2).", "A qualitative inspection suggests that the low accuracy in the verb conjunction case (67.5%) is due to ambiguous sentences such as if you have any questions or need/needs, where the target can be re-interpreted as a noun that is acceptable in the relevant context.", "(Footnote: The nonce condition has higher accuracy because our substitution procedure in English tends to reduce POS ambiguity.)", "In languages such as Italian and Russian, which have richer morphology and less ambiguity at the part-of-speech level than English, the LSTMs show much better accuracy and a smaller gap between original and nonce sentences.", "These results are in line with human experimental studies that found that richer morphology correlates with fewer agreement attraction errors (Lorimor et al., 2008).", "The pattern of accuracy rates in general, and the accuracy for the shared V NP conj V construction in particular, are consistent with the finding that Russian is less prone to human attraction errors than Italian, which, in turn, shows fewer errors than English.", "The largest drop in accuracy between original and nonce sentences occurred in Hebrew.", "A qualitative analysis of the data in this language suggests that this might be due to the numerical prevalence of a few constructions that can have multiple alternative readings, some of which can license the incorrect number.", "We leave a more systematic analysis of this finding for future research.", "Human results.", "To put our results in context and provide a reasonable upper bound on the LM performance, in particular for nonce sentences, we next compare model performance to that of human subjects in Italian.", "(Table 3: Subject and LSTM accuracy on the Italian test set, by construction and averaged.)", "Table 3 reports the accuracy of the LSTMs and the human subjects, grouped by construction.", "There was a consistent gap in human accuracy between original and nonce sentences (6.1% on average).", "The gap in accuracy between the human subjects and the model was quite small, and was similar for original and nonce sentences (2.4% and 2.9%, respectively).", "In some of the harder constructions, particularly subject-verb agreement with an embedded clause, the accuracy of the LSTMs on nonce sentences was comparable to human accuracy (92.5 ± 2.1 vs.
92.3%).", "To test whether the human subjects and the models struggle with the same sentences, we computed for each sentence (1) the number of times the human subjects selected the correct form of the target minus the number of times they selected the incorrect form, and (2) the difference in model log probability between the correct and incorrect form.", "The Spearman correlation between these quantities was significant, for both original (p < 0.05) and nonce sentences (p < 0.001).", "This indicates that humans were more likely to select the correct form in sentences in which the models were more confident in a correct prediction.", "(Footnote: The SM contains the results for the other languages broken down by construction. Note that Table 3 reports linguistically intuitive construction labels; the corresponding POS patterns are, in the same order as the table rows: DET ADJ NOUN, NOUN VERB PRON VERB, NOUN VERB VERB, ADJ ADJ CCONJ ADJ, NOUN ADJ PUNCT PRON VERB, NOUN NOUN ADV ADJ, NOUN NOUN VERB, VERB NOUN CCONJ VERB.)", "In the easiest constructions, such as DET [ADJ] NOUN and ADJ [conjoined ADJs] ADJ, one or more adjectives that intervene between the cue and the target agree in number with the target, providing shorter-distance evidence about its correct number.", "(Footnote: The relatively low nonce LSTM performance on this construction is due to a few adjectives that could be reinterpreted as nouns.)", "For example, in (2) una film inutile ma almeno festivo e giovanile ('a useless but at least festive and youthful movie'), the adjective festivo is marked for singular number, offering a nearer reference for the target number than the cue inutile.", "At the other end, NOUN [PP] VERB (participial) and NOUN [PP] ADVERB ADJ are difficult.", "Particularly in the nonce condition, where semantics is unhelpful or even misleading, the target could easily be interpreted as a modifier of the noun embedded in the preceding prepositional phrase.", "For example, for the nonce case (3) orto di regolamenti davvero pedonale/i ('truly pedestrian orchard of rules'), both the subjects and the model preferred to treat pedestrian as a modifier of rules (orchard of truly pedestrian rules), resulting in the wrong agreement given the intended syntactic structure.", "Attractors.", "We define attractors as words with the same POS as the cue but the opposite number, which intervene in the linear order of the sentence between the cue and the target (see the sketch below).", "(Figure 2: Accuracy by number of attractors in Italian. Human performance is shown in red and LSTM in blue (median model among top 5 ranked by perplexity); error bars show standard error.)", "Attractors constitute an obvious challenge for agreement processing (Bock and
Miller, 1991).", "We show how their presence affects human and model behavior in Fig. 2. We limit our analysis to a maximum of two attractors, since there were only two original sentences in the test corpus with three attractors or more.", "Both model and human accuracies degraded with the number of attractors; the drop in accuracy was sharper in the nonce condition.", "While the model performed somewhat worse than humans, the overall pattern was comparable.", "Our results suggest that the LSTM is quite robust to the presence of attractors, in contrast to what was reported by Linzen et al. (2016).", "We directly compared our English LSTM LM to theirs by predicting verb number on the Linzen et al. (2016) test set.", "We extracted sentences where all of the words between subject and verb were in our LM vocabulary.", "Out of those sentences, we sampled 2000 sentences with 0, 1 and 2 attractors and kept all the sentences with 3 and 4 attractors (1329 and 347 sentences, respectively).", "To ensure that our training set and Linzen's test set do not overlap (both are based on Wikipedia texts), we filtered out all of the test sentences that appeared in our training data (187 sentences).", "Fig. 3 compares our results to the results of the best LM-trained model in Linzen et al. (2016) (their Google LM).", "(Footnote: These subject-verb agreement results are in general higher than for our own subject-verb agreement construction (NOUN VERB VERB) because the latter always includes an embedded clause, and it is therefore harder on average.)", "(Figure 3: Linzen's attractor set. Our LM-trained LSTM (blue; median model) compared to their LSTM with explicit number supervision (green) and their best LM-trained LSTM (red).)", "Not only did our LM greatly outperform theirs, but it approached the performance of their supervised model.", "(Footnote: ... Linzen's dataset was recently reported by Yogatama et al. (2018).)", "This difference in results points to the importance of careful tuning of LM-trained LSTMs, although we must leave to a further study a more detailed understanding of which differences crucially determine our better performance.", "Early work showed that RNNs can, to a certain degree, handle data generated by context-free and even context-sensitive grammars (e.g., Elman, 1991, 1993; Rohde and Plaut, 1997; Christiansen and Chater, 1999; Gers and Schmidhuber, 2001; Cartling, 2008).", "These experiments were based on small and controlled artificial languages, in which complex hierarchical phenomena were often overrepresented compared to natural languages.", "Our work, which is based on naturally occurring data, is most closely related to that of Linzen et al. (2016) and Bernardy and Lappin (2017), which we discussed in the introduction.", "Other recent work has focused on the morphological and grammatical knowledge that RNN-based machine-translation systems and sentence embeddings encode, typically by training classifiers to decode various linguistic properties from hidden states of the network (e.g., Adi et al., 2017; Belinkov et al., 2017; Shi et al., 2016), or looking at whether the end-to-end system correctly translates sentences with challenging constructions (Sennrich, 2017).", "Previous work in neurolinguistics and psycholinguistics used jabberwocky, or pseudo-word, sentences to probe how speakers process syntactic information (Friederici et al., 2000; Moro et al., 2001; Johnson and Goldberg, 2013)."
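The counting sketch referenced in the attractor definition above: it simply counts intervening words that share the cue's POS but carry the opposite number value. The token representation is an assumption for illustration.

```python
def count_attractors(tokens, cue_index, target_index):
    """Count attractors between the cue and the target.

    `tokens` is a list of (form, pos, number) triples, where `number`
    is 'Sing', 'Plur' or None; attractors are intervening words with the
    same POS as the cue but the opposite number.
    """
    _, cue_pos, cue_number = tokens[cue_index]
    count = 0
    for _form, pos, number in tokens[cue_index + 1:target_index]:
        if pos == cue_pos and number is not None and number != cue_number:
            count += 1
    return count
```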
"Such sentences are obtained by substituting original words with morphologically and phonologically acceptable nonce forms.", "We are not aware of work that used nonce sentences made of real words to evaluate the syntactic abilities of models or human subjects.", "As a proof of concept, Pereira (2000) and, later, Mikolov (2012) computed the probability of Chomsky's famous colorless green ideas sentence using a class-based bigram LM and an RNN, respectively, and showed that it is much higher than the probability of its shuffled ungrammatical variants.", "We ran an extensive analysis of the abilities of RNNs trained on a generic language-modeling task to predict long-distance number agreement.", "Results were consistent across four languages and a number of constructions.", "They were above strong baselines even in the challenging case of nonsense sentences, and not far from human performance.", "We are not aware of other collections of human long-distance agreement judgments on nonsensical sentences, and we thus consider our publicly available data set an important contribution of our work, of interest to students of human language processing in general.", "The constructions we considered are quite infrequent (according to a rough estimate based on the treebanks, the language in which they are most common is Hebrew, and even there they occur with an average 0.8% sentence frequency).", "Moreover, they vary in the contexts that separate the cue and the target.", "So, RNNs are not simply memorizing frequent morphosyntactic sequences (which would already be impressive, for systems learning from raw text).", "We tentatively conclude that LM-trained RNNs can construct abstract grammatical representations of their input.", "This, in turn, suggests that the input itself contains enough information to trigger some form of syntactic learning in a system, such as an RNN, that does not contain an explicit prior bias in favour of syntactic structures.", "In future work, we would like to better understand what kind of syntactic information RNNs are encoding, and how.", "On the one hand, we plan to adapt methods to inspect information flow across RNN states (e.g., Hupkes et al., 2017).", "On the other, we would like to expand our empirical investigation by focusing on other long-distance phenomena, such as overt case assignment (Blake, 2001) or parasitic gap licensing (Culicover and Postal, 2001).", "While it is more challenging to extract reliable examples of such phenomena from corpora, their study would probe more sophisticated syntactic capabilities, possibly even shedding light on the theoretical analysis of the underlying linguistic structures.", "Finally, it may be useful to complement the corpus-driven approach used in the current paper with constructed evaluation sentences that isolate particular syntactic phenomena, independent of their frequency in a natural corpus, as is common in psycholinguistics (Enguehard et al., 2017).", "We thank the reviewers, Germán Kruszewski, Gerhard Jäger, Adam Liška, Tomas Mikolov, Gemma Boleda, Brian Dillon, Christophe Pallier, Roberto Zamparelli and the Paris Syntax and Semantics Colloquium audience for feedback and advice." ]
[ "abstain", "objective", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "method", "method", "result", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "other", "other", "abstain", "other", "other", "other", "other", "method", "other", "method", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "other" ]
[ "We introduce dodeca Dialogue: a set of 12 tasks that measures if a conversational agent can communicate engagingly with personality and empathy, ask questions, answer questions by utilizing knowledge resources, discuss topics and situations, and perceive and converse about images.", "By multi-tasking on such a broad large-scale set of data, we hope to both move towards and measure progress in producing a single unified agent that can perceive, reason and converse with humans in an open-domain setting.", "We show that such multi-tasking improves over a BERT pre-trained baseline, largely due to multi-tasking with very large dialogue datasets in a similar domain, and that the multi-tasking in general provides gains to both text and image-based tasks using several metrics in both the finetune and task transfer settings.", "We obtain state-of-the-art results on many of the tasks, providing a strong baseline for this challenge.", "One of the goals of AI is to build a seeing, talking agent that can discuss, reason, empathize, and provide advice in short a system that can perform natural communication displaying many of the properties expected when speaking to a human partner.", "Ideally, it should be able to be knowledgeable and personable, expert and engaging, serious or humorous depending on the situation.", "It should be capable of answering questions, asking questions, responding to statements, having its own persona, and grounding the dialogue with external information and images.", "While no single task exists that can train an agent or measure its ability on all of these axes at once, a number of distinct large-scale datasets targeting subsets of these skills have recently become available.", "We thus assemble these disparate tasks to form a single challenge: dodeca Dialogue, consisting of 12 subtasks.", "Each contains both training data to build the skills we desire for our agent, and validation and test sets to measure our agent's ability at that skill.", "The overall goal is a single agent that can display all these skills.", "As some of the subtasks have very large datasets, e.g. 
2.2 billion utterances, they can possibly help the agent with other skills too.", "We thus build a model capable of training and multi-tasking on all these sources.", "We employ a transformer-based architecture (Vaswani et al., 2017) which accepts an image, external textual information and dialogue history as input, and generates a response for a given dialogue turn.", "Practically, by pre-training on the largest of the subtasks and then multi-tasking on all them, we can obtain state-of-the-art results compared to existing independently reported performance on all 10 of the 12 subtasks that have previous comparable results.", "We hence set a strong baseline for this challenge.", "While many existing approaches use large-scale pre-training on general text corpora, we show that using dialogue datasets instead, which are more closely linked to the desired agent's goals, is a strong alternative.", "However, many challenges remain.", "While multitasking performs well, and has clear benefits, as shown in other works (Liu et al., 2015; Raffel et al., 2019), when compared to fine-tuning of the same system we do obtain typically small losses.", "Zero-shot transfer to left-out tasks is also demanding for current approaches.", "We analyze these aspects, along with our model's ability to ground on external knowledge and images in conjunction with the dialogue context, the impact of decoding algorithms, analysis of the weighting of tasks during multi-tasking as well as cross-task transfer ability in order to shed light and make progress on this challenging topic.", "The dodeca Dialogue task is intended to assemble important aspects of an engaging conversational agent into a single collection, where each subtask covers some of those goals.", "Such an agent should be able to get to know you when you first talk to it (ConvAI2), discuss everyday topics (DailyDialog, pushshift.io Reddit, Twitter, Cornell Movie), speak knowledgeably at depth (Wizard of Wikipedia, Ubuntu) and answer questions on such topics (ELI5).", "It must be able to handle situated conversations and demonstrate empathy (Empa-thetic Dialog, LIGHT) .", "It can also discuss images, as this is a vital part of human connection (Image Chat, IGC).", "We note that all of the provided subtasks are in English.", "ConvAI2 ConvAI2 is a dataset used at the NeurIPS 2018 competition of the same name, and is based on PersonaChat (Zhang et al., 2018; Dinan et al., 2020).", "The training data involves paired crowdworkers having a conversation where they get to know each other, in which each is given a role to play based on sentences describing their persona, which were also separately crowdsourced (while they cannot see their partner's persona).", "It thus involves asking and answering questions, responding in kind, and getting to know the other speaker and engaging them in friendly conversation useful skills for an open-domain conversational agent.", "DailyDialog Li et al. 
(2017) built a dialogue dataset intended to reflect conversations occurring in daily life.", "It covers ten categories ranging from holidays to financial topics, rather than focusing on one domain.", "Compared to ConvAI2, these conversations seem more in keeping with partners who already know each other, and want to discuss typical life details, again useful skills for a conversational agent.", "The dataset is also annotated with topic, emotion and utterance acts, but here we ignore these annotations and learn only from the utterances in the dialogue turns.", "Wizard of Wikipedia This task involves discussing a given topic in depth, where the goal is to both engage the partner as well as display expert knowledge (Dinan et al., 2019).", "The training set consists of 1247 topics and a retrieval system over Wikipedia from which the dialogues were grounded during the human-human crowdsourced conversations.", "The topics were also crowdsourced and range from e-books to toga parties to showers.", "A model can thus learn to also perform similar retrieval and grounding at test time to potentially discuss any topic if it can generalize.", "We use the gold knowledge version of the task.", "We see this skill as a core component of an agent being able to not just chitchat, but actually engage a user in discussing real information about the world, e.g. by retrieving over documents from the internet.", "(2019) constructed a dataset of crowdworker conversations grounded in an emotional situation.", "In each dialogue, one speaker describes a personal situation and the other plays a listener role, displaying empathy during the discussion.", "The dataset contains descriptions of the situations being discussed with an attached emotion label, but these are not used here.", "Trained models are measured playing the part of the empathetic listener, an important feature of an agent to which humans wish to speak.", "Cornell Movie Danescu-Niculescu-Mizil and Lee (2011) constructed a corpus containing a collection of fictional conversations from movie scripts, thus covering a large diversity of topics and emotional states.", "LIGHT LIGHT (Urbanek et al., 2019) involves situated interactions between characters in a text adventure game.", "Similar to ConvAI2, personas for each character are given, with the training set including conversations between crowdworkers playing those roles.", "Different from ConvAI2, included are emotes and actions grounded within the game world (e.g. picking up and giving objects).", "As such, it measures the ability of a conversational agent to ground its discussion on a dynamic environment.", "ELI5 ELI5 (Fan et al., 2019) involves long-form question answering grounded on multiple retrieved documents in order to answer common questions which people ask on the popular ELI5 subreddit.", "As such, the answers are in a conversational form applicable to a dialogue agent.", "Ubuntu Lowe et al. (2015) built a dataset that involves in-depth discussions in solving Ubuntu problems.", "This studies the ability of an agent on a very focused single topic, and is also a standard benchmark in the field.", "Twitter We use a variant of Twitter discussions (text-only), which have been used in many existing studies, e.g. Sordoni et al. (2015); See et al. 
(2019).", "This data naturally involves everyday discussions about topics that people care about.", "The public forum makes them different from the more personal discussions of some of the other tasks.", "This is the second largest dataset in the collection, and we thus measure in experiments its ability to help performance on other tasks.", "pushshift.io Reddit We use a variant of Reddit discussions (text-only), which has also been used in several existing studies, see e.g. Yang et al. (2018); Mazare et al. (2018); Keskar et al. (2019).", "Following Humeau et al. (2019), we use a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io, training to generate a comment conditioned on the full thread leading up to the comment, spanning 2200M training examples.", "This is the largest dataset in the collection much larger than the others.", "The subreddits cover a vast range of topics, and hence this is a strong candidate for helping improve performance on other tasks via pre-training and multi-tasking.", "Note this dataset does not overlap with ELI5.", "Image Chat Shuster et al. (2018) collected a crowdsourced dataset of human-human conversations about an image with a given personality, where the goal is to engage the other speaker.", "As such, it covers natural conversational responses, including displays of emotion and humor.", "Image Grounded Conversations (IGC) IGC (Mostafazadeh et al., 2017) similarly involves two speakers discussing an image, here focusing on questions and responses.", "It only includes a validation and test set, and so we converted most of the validation set to form a small training set.", "Metrics For all tasks, we use the following metrics: perplexity (PPL), BLEU, ROUGE-1,-2 and -L and F1, and also pick the metric most used in the literature as that subtask's Score' to compare to existing work.", "Multi-tasking As we are interested in building a single conversational agent, we measure the ability of multi-tasked models that can perform all twelve tasks at once.", "Single-Task Fine-tuning We can still compare such multi-tasked models to single-task fine-tuned baselines to assess if we have gained or lost performance.", "Like other works (Liu et al., 2015; Raffel et al., 2019) we also consider a multi-task followed by finetune setup in order to see if this produces better models.", "The latter tests if multi-tasking still proves useful in the single-task setting.", "Zero-shot Transfer Finally, we consider a leave-one-out zero-shot setting whereby training is constrained to be on all the training data except for the task being evaluated .", "This evaluates the performance on truly new unseen tasks, an important behavior given there are always new tasks.", "3.1 Existing Models and Results", "Where possible, we have tried to track the best existing results for each task and provided a comparison in our final results table.", "As ConvAI2 was a competition, a number of competitors built strong models on it.", "The best results were obtained by large pre-trained transformers (Dinan et al., 2020).", "In particular, Wolf et al. (2019b) pre-trained via the method of Radford et al. (2018) using the BooksCorpus dataset, resulting in the best perplexities and F1 scores.", "Since then, results have gotten even better with the ad-vent of better and larger pretraining (Lewis et al., 2019), which we compare to here; the same work also reports strong results on ELI5.", "He et al. 
(2019) recently obtained strong results on the DailyDialog and Cornell Movie tasks in terms of perplexity by pre-training on 10% of CC-NEWS (Bakhtin et al., 2019), thus using 100 million sentences (2.7 billion words) and then fine-tuning a transformer based model with a multi-task strategy.", "Overall, large pre-trained transformers indeed provide strong existing results on many of the tasks.", "Several large language modeling projects have been undertaken in order to show prowess in multi-tasking ability (Radford et al., 2019; Keskar et al., 2019), and transformer-based approaches have been adapted to language and vision tasks as well (Lu et al., 2019; Tan and Bansal, 2019; Li et al., 2019a; Shuster et al., 2018).", "As well as citing the relevant papers' results where possible in the experiments section, we also train a BERT-based (Devlin et al., 2019) generative model as an additional baseline.", "In the interests of feasibility, there are tasks we did not include in dodeca Dialogue.", "For example, there are additional knowledge tasks (Qin et al., 2019; Moghe et al., 2018; Gopalakrishnan et al., 2019) and image-based datasets (Das et al., 2017) one could use.", "There are also a large number of QA tasks we did not include, e.g. Rajpurkar et al. (2016); Choi et al. (2018).", "In general, our choices were made based on tasks that after training might produce an engaging dialogue agent that humans naturally would want to talk to which means either natural datasets or crowdsourced datasets where crowdworkers were encouraged to engage one another.", "As computational resources and ambitions scale, it would be interesting to add more tasks as well, while retaining the twelve we have chosen here in order to continue to evaluate their success, whilst extending the scope of the entire system.", "All the subtasks in the collection we use here already exist.", "Other research projects have also built such collection-based tasks before as well.", "In particular, the NLP decathlon (McCann et al., 2018), from which the name of this paper is inspired, collects together a diverse set of NLP tasks from sentiment detection to parsing.", "Talmor and Berant (2019) collect a set of 10 QA datasets and build MULTIQA.", "Recently, (Raffel et al., 2019) also similarly multi-tasked a large set of NLP tasks, on an even bigger scale.", "Our work differs from these in that it is focused on dialogue tasks which naturally group together to form a conversational agent.", "BERT baseline.", "We implement a generative baseline using BERT via adapting the model using a standard auto-regressive loss.", "We concatenate both the context and current generation and provide these as input to the model, using BERT's sentence embeddings to distinguish the roles in the network.", "Although BERT is trained to predict masked tokens, we find that fine-tuning can easily adjust its behavior to predicting the next token.", "Our BERT baseline is roughly equivalent to the model of Wolf et al. 
(2019b), but does not have a classification loss term.", "The implementation relies on HuggingFace Transformers (Wolf et al., 2019a).", "We thus finetune this model for each of our tasks, except Image Chat and IGC which require images as input.", "Image+Seq2Seq.", "We use a modification of a transformer Seq2Seq architecture (Vaswani et al., 2017), additionally adding image features to the encoder.", "Our model is an 8 layer encoder, 8 layer decoder with 512 dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation (Miller et al., 2017).", "We use BPE following Humeau et al. (2019) via lower-cased Wikipedia, Toronto Books, and Open Subtitles with 30k merges, giving 54,940 terms.", "Reported perplexities are computed with this dictionary.", "For image features, we use the pre-trained image features from the ResNeXt-IG-3.5B model, a ResNeXt 32 x 48d architecture (Xie et al., 2017) trained on 3.5 billion Instagram images following the procedure described by Mahajan et al. (2018).", "(Table 2: validation perplexity per task for the different pre-training strategies: BERT based, Single Task (from scratch), Single Task (fastText init), Twitter + Single Task, Reddit Only, Reddit + Single Task, MT All Tasks + FT Single Task, All Tasks MT, and Leave-One-Out Zero-Shot; e.g., ConvAI2: 19.4, 43.3, 38.9, 28.7, 18.3, 11.4, 11.2, 11.3, 16.4 and DailyDialog: 15.2, 37.8, 32.8, 20.8, 18.2, 10.4, 10.2, 11.8, 15.5.)", "This model was previously used successfully for the Image Chat task in Shuster et al. (2018).", "The final encoding from the ResNeXt model is a vector of size 2048; we then use a linear layer to project into the same size as the text encoding, and add it as an extra token at the end of the transformer's encoder output, then feed them all into the decoder.", "During fine-tuning we train the text transformer, but leave the image encoding fixed, apart from fine-tuning the linear projection.", "The text transformer is fine-tuned with a standard auto-regressive negative log-likelihood (NLL) loss, following usual sequence to sequence training schemes.", "Our best models are available at https://parl.ai/projects/dodecadialogue .", "Task Training We employ the ParlAI framework (Miller et al., 2017) for training on single tasks and for multi-tasking, as many of the tasks are already implemented there, along with a (multi-task) training and evaluation framework for such models.", "Pre-training As pushshift.io Reddit and (to some extent) Twitter are much larger than our other tasks, we try pre-training the Seq2Seq module of our Image+Seq2Seq networks with those datasets, before multi-tasking on all of the tasks, or for evaluating single task fine-tuning.", "For Reddit, the model was trained to generate a comment conditioned on the full thread leading up to the comment.", "Comments containing URLs or that were under 5 characters in length were removed from the corpus, as were all child comments.", "Comments were truncated to 1024 BPE tokens.", "The model was trained with a batch size of 3072 sequences for approximately 3M updates using a learning rate of 5e-4, and an inverse square root scheduler.", "This took approximately two weeks using 64 NVIDIA V100s.", "We note that our transformer pre-training only includes text, while our image encoder was pre-trained separately in previous work (Mahajan et al., 2018).", "Learning how to combine these sources occurs during fine-tuning.", "It is important to note that, while compute-heavy, pre-training was conducted exactly once, and all of the subsequent fine-tuning is significantly faster to run.", "Transfer Performance between Tasks We first perform a preliminary study on a subset of the tasks: Reddit, ConvAI2, Wizard of Wikipedia and Empathetic Dialogues, and report the transfer ability of training on some of them, and testing on all of them (using the validation set), reporting perplexity.", "The results are reported in Table 3.", "They show that training on pushshift.io Reddit alone, a huge dataset, is effective at transfer to other tasks, but never as effective as fine-tuning on the task itself.", "Moreover, fine-tuning on most of the smaller tasks actually provides improvements over pushshift.io Reddit training alone at transfer, likely because the three tasks selected are more similar to each other than to pushshift.io Reddit.", "Finally, training on all four tasks is the most effective strategy averaged over all tasks compared to any other single model, although this does not beat switching between different fine-tuned models on a per-task basis.", "Comparison of Pre-training + Fine-tuning strategies Across all 12 tasks, we compare several pre-training strategies: using BERT, no pretraining at all, only initializing via fastText (Joulin et al., 2017), and using Twitter and pushshift.io Reddit pre-training with our Image+Seq2Seq architecture.", "For each variant we tune the learning rate, layers, number of heads and embedding size, with less pre-training typically requiring smaller capacity models.", "We then only fine-tune on a single task in these experiments, and report perplexity for that task alone, over all 12 tasks.", "The results are given in Table 2, reporting results on the validation set.", "(Footnote: We choose not to use the test set here: as we report so many numbers, we do not want to overuse it.)", "The results show a clear reduction in perplexity with more pre-training, as expected.", "This is most easily seen by the dodeca Score (last row) that is the mean perplexity over all 12 tasks, which decreases from 49.5 (from scratch models) down to 17.1 with pushshift.io Reddit pre-training.", "FastText (45.7) and Twitter (35.6) initializations help, but nowhere near as much.", "BERT fares better, but still is clearly worse than pushshift.io Reddit pre-training.", "The hypothesis here is that pushshift.io Reddit yields much more effective transfer as it is a dialogue task like our others, whereas non-dialogue corpora such as Wikipedia are not.", "This was previously observed for retrieval models in Humeau et al. (2019).", "Note that we do not report results for the image dialogue tasks for BERT as that architecture does not deal with images.", "Finally, as pushshift.io Reddit is so effective, we also compare to pushshift.io Reddit training only, with no fine-tuning at all across all tasks, similar to our initial study in Table 3.", "The performance is impressive, with some tasks yielding lower perplexity than BERT pre-training + single task fine-tuning.", "However, it still lags significantly behind fine-tuning applied after pushshift.io Reddit pretraining.", "Image and Knowledge Grounding Some of our tasks involve grounding on knowledge or images.", "To show such grounding helps, we report results with and without grounding on those tasks in Table 4, reporting perplexity.", "Particularly for Wizard of Wikipedia (knowledge) and Image Chat (images) such grounding has a clear effect.", "Multi-Task Results Next, we perform multitask training across all tasks, which is our ultimate goal in order to obtain an open-domain conversational agent.", "(Table 7: comparison to existing approaches: for each task, the existing independent approach's PPL and Score (with its metric) versus our MT + FT and All Tasks MT models; e.g., ConvAI2 (Lewis et al., 2019): PPL *11.9, F1 *20.7, versus 11.1/21.6 (MT + FT) and 10.8/21.7 (All Tasks MT); DailyDialog (He et al., 2019): PPL 11.1, versus 10.4/18.2 and 12.0/16.2.)", "We optimize over the same set of hyperparameters as before, including multi-tasking weights for tasks, where one samples during training with differing probabilities, and we choose the best model by performing early stopping on the average performance across all tasks.", "In this way, we treat all 12 tasks as a single task, and thus during test time it is the model's responsibility to understand how to respond from the context (image/dialogue) itself.", "In the end we did not obtain clear improvements beyond pre-training with pushshift.io Reddit and then equally sampling from all tasks.", "We report that final model's validation performance in terms of perplexity in Table 2 (second to last column, All Tasks MT).", "It achieves a dodeca Score of 19.1, superior to all pre-train fine-tune approaches except pushshift.io Reddit pre-training followed by fine-tuning, and is also superior to a single pushshift.io Reddit model.", "However, comparing across tasks, while most are close to the corresponding best fine-tuned model, many are just slightly worse.", "This is an expected result and is often reported in multitask systems (Raffel et al., 2019).", "We look upon this result as both positive (we can obtain a single model doing well on all tasks, which a fine-tuned model cannot) and as a remaining challenge to the community: can one find architectures that leverage multi-tasking even better?", "Multi-Task followed by Fine-Tuning As also performed in Liu et al. (2015); Raffel et al. (2019), we can try to train in a multi-task manner on all tasks, before fine-tuning on a single task, and build a separate model performing this procedure for all tasks, in an attempt to improve single task results further.", "Using this approach, one is free to perform hyperparameter search differently for each task.", "Here, we found that applying relative task up-weighting during multi-tasking training made a clear difference to the final quality of the fine-tuned target task model, see Table 5.", "Generally, better results come from assigning most of the multi-task weight towards the task itself to be fine-tuned.", "Using such an approach we can get marginally better results than fine-tuning alone, although the differences are generally small.", "The final best models per task are shown compared to other approaches in Table 2 (third to last column, MT All Tasks + FT Single Task).", "The final validation dodeca Score is 16.8, only slightly below 17.1 for fine-tuning.", "Decoding Strategies So far, we have only been measuring perplexity, but we are actually interested in generation, which requires us to decode.", "We consider several standard approaches: greedy, beam search (with beam size, and minimum and maximum output length hyperparameters), beam search with beam blocking (blocking n-grams, we use n = 3) (Paulus et al., 2018) and nucleus sampling (with parameter p) (Holtzman et al., 2019).", "(Footnote: The length parameters are important for ELI5.)", "We show the effect of these choices in Table 6 for ConvAI2 and Wizard of Wikipedia (WoW).", "Final Systems The final test performance for our best multi-task and fine-tuned (via multi-task followed by fine-tuning) systems are reported in Table 7 (right), with more detailed results with all decoding-based metrics, and validation as well as test performance in Appendix A. Here, for the multi-task model we have fine-tuned the decoding hyperparameters per task.", "For results with a single set of decoding hyperparameters, see also Appendix A.", "We generally find across all metrics a similar story as before when comparing the fine-tuning with multi-tasking: multi-tasking is successful, but the challenge is still to do better.", "Comparison to Existing Systems We compare to existing state-of-the-art results previously published for each task.", "Results are given in Table 7.", "As existing works report different metrics per task, we report perplexity where possible (but note, they may be computed on a different dictionary), and choose the sequence decoding-based metric that is commonly reported per task (listed in the 'Metric' column), where the 'Score' column reports its value.", "We compare these to our best fine-tuned and multi-tasked models.", "Our multi-task model outperforms all available existing results, with 2 of the 12 tasks having no previous result.", "It is only surpassed by our fine-tuned model which also outperforms all available existing results.", "Overall, our methods set a strong challenge to future approaches.", "Human Evaluation In addition to automatic metrics, we perform human evaluation on two of the tasks to assess the abilities of our All Tasks MT conversational agent: the knowledge grounding task Wizard of Wikipedia (WoW) and the image grounding task Image Chat.", "We follow the same evaluation protocols as in Dinan et al. (2019); Shuster et al. (2018), comparing our method to the existing approaches referenced in Table 7.", "This involves collecting 100 human-bot conversations for WoW using crowdworkers, involving 8-10 turns each, across seen topics (seen in the training set) and unseen topics, and 500 image-based responses for Image Chat.", "A separate set of crowdworkers are then used to compare models pairwise following the ACUTE-Eval procedure of (Li et al., 2019b), where they are asked to choose which is the more engaging response for Image Chat (1500 trials) and Who would you prefer to talk to for a long conversation? for WoW (400 trials).", "The results, given in Figure 1, show our method outperforming the existing state of the art generative models on all three comparisons: Image Chat, WoW seen topics and WoW unseen topics.", "All three results are statistically significant (binomial test, p < .05).", "Additional details and results breakdown are given in Appendix Section B.", "(Figure 1: Human evaluations on Image Chat and Wizard of Wikipedia (WoW), comparing existing state of the art models with our All Tasks MT conversational agent. Engagingness win rates are statistically significant in all three matchups (binomial test, p < .05).)", "Example Outputs We show some example outputs of our multi-task model for some of the tasks in Appendix C.", "Our model is able to leverage images, knowledge, and given personality attributes to produce engaging dialogue with a large amount of variety, depending on the situation.", "Leave-One-Out Zero-Shot Performance Last, but not least, we evaluate the performance of a multi-task model at zero-shot transfer to a new dialogue task.", "This is performed by training on all but one of the tasks, and reporting performance on the left out one, repeating this experiment for all tasks.", "Our best performing models in that regard are reported in Table 2 (last column).", "First, it is reassuring that the overall scores are reasonable, outperforming a pushshift.io Reddit only model on every task except pushshift.io Reddit itself.", "This means that multi-tasking across many tasks helps transfer learning.", "However, the gap between zero-shot performance and multi-task or fine-tuning performance means there is still a significant challenge in improving these results.", "Finally, we believe that reporting results in this regime in addition to multitasking results may help avoid the temptation to cheat at multi-tasking by trying to detect the task and then apply a separate fine-tuned classifier, as presumably that approach will not truly leverage reasoning and skills between tasks, which transfer may help measure.", "We have introduced the dodeca Dialogue task, and provide strong baseline results leveraging multimodal Image+Seq2Seq transformers trained across all tasks.", "The goal of introducing this task is not just as another challenge dataset, but to further motivate building and evaluating conversational agents capable of multiple skills, one of the core goals of AI.", "We believe current systems are closer to that goal than ever before, but we also still have a long way to go.", "Recently reported results show systems can be reasonably competitive compared to humans in particular domains for short conversations (Li et al., 2019b; Shuster et al., 2018).", "This work tries to bridge the gap to avoid agents with niche skills, to move towards evaluating an open-domain set of skills.", "Still, despite leveraging 12 tasks, there are many skills not included in our set.", "For example, longer
conversations involving memory (Moon et al., 2019), or mixing open-domain conversation with task oriented goals.", "Future work should consider adding these tasks to the ones used here, while continuing the quest for improved models." ]
[ "method", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "result", "abstain", "objective", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain" ]
[ "Most neural machine translation models only rely on pairs of parallel sentences, assuming syntactic information is automatically learned by an attention mechanism.", "In this work, we investigate different approaches to incorporate syntactic knowledge in the Transformer model and also propose a novel, parameter-free, dependency-aware self-attention mechanism that improves its translation quality, especially for long sentences and in low-resource scenarios.", "We show the efficacy of each approach on WMT English German and English Turkish, and WAT English Japanese translation tasks.", "Research in neural machine translation (NMT) has mostly exploited corpora consisting of pairs of parallel sentences, with the assumption that a model can automatically learn prior linguistic knowledge via an attention mechanism (Luong et al., 2015).", "However, Shi et al. (2006) found that these models still fail to capture deep structural details, and several studies (Sennrich and Haddow, 2016; Eriguchi et al., 2017; Chen et al., 2017, 2018) have shown that syntactic information has the potential to improve these models.", "Nevertheless, the majority of syntax-aware NMT models are based on recurrent neural networks (RNNs; Elman 1990), with only a few recent studies that have investigated methods for the Transformer model (Vaswani et al., 2017).", "Wu et al. (2018) evaluated an approach to incorporate syntax in NMT with a Transformer model, which not only required three encoders and two decoders, but also target-side dependency relations (precluding its use to low-resource target lan-guages).", "Zhang et al. (2019) integrate source-side syntax by concatenating the intermediate representations of a dependency parser to word embeddings.", "Work done while at Tokyo Institute of Technology.", "In contrast to ours, this approach does not allow to learn sub-word units at the source side, requiring a larger vocabulary to minimize out-of-vocabulary words.", "Saunders et al. 
(2018) interleave words with syntax representations which results in longer sequences requiring gradient accumulation for effective training while only leading to +0 .", "5 BLEU on WAT Ja-En when using ensembles of Transformers.", "Finally, Currey and Heafield (2019) propose two simple data augmentation techniques to incorporate source-side syntax: one that works well on low-resource data, and one that achieves a high score on a large-scale task.", "Our approach, on the other hand, performs equally well in both settings.", "While these studies improve the translation quality of the Transformer, they do not exploit its properties.", "In response, we propose to explicitly enhance the its self-attention mechanism (a core component of this architecture) to include syntactic information without compromising its flexibility.", "Recent studies have, in fact, shown that self-attention networks benefit from modeling local contexts by reducing the dispersion of the attention distribution (Shaw et al., 2018; Yang et al., 2018, 2019), and that they might not capture the inherent syntactic structure of languages as well as recurrent models, especially in low-resource settings (Tran et al., 2018; Tang et al., 2018).", "Here, we present parent-scaled self-attention (PASCAL ): a novel, parameter-free local attention mechanism that lets the model focus on the dependency parent of each token when encoding the source sentence.", "Our method is simple yet effective, improving translation quality with no additional parameter or computational overhead.", "Our main contributions are: introducing PASCAL : an effective parameter-free local self-attention mechanism to incorporate source-side syntax into Transformers; adapting LISA (Strubell et al., 2018) to sub-word representations and applying it to NMT; D d model WV WQ WK d d model TX d TV Q K d T TS softmax() p 23353 Themonkeyeatsabanana Input: The monkey eats a banana * -1/2 T T T TN T d M T dist() Figure 1: Parent-Scaled Self-Attention (PASCAL ) head for the input sequence The monkey eats a banana.", "similar to concurrent work (Pham et al., 2019), we find that modeling linguistic knowledge into the self-attention mechanism leads to better translations than other approaches.", "Our extensive experiments on standard En De, En Tr and En Ja translation tasks also show that", "(a) approaches to embed syntax in RNNs do not always transfer to the Transformer, and", "(b) PASCAL consistently exhibits significant improvements in translation quality, especially for long sentences.", "In order to design a neural network that is effi-cient to train and that exploits syntactic information while producing high-quality translations, we base our model on the Transformer architecture (Vaswani et al., 2017) and upgrade its encoder with parent-scaled self-attention (PASCAL ) heads at layer l s .", "PASCAL heads enforce contextualization from the syntactic dependencies of each source token, and, in practice, we replace standard self-attention heads with PASCAL ones in the first layer as its inputs are word embeddings that lack any contextual information.", "Our PASCAL sub-layer has the same number H of attention heads as other layers.", "Source syntax Similar to previous work, instead of just providing sequences of tokens, we supply the encoder with dependency relations given by an external parser.", "Our approach explicitly exploits sub-word units, which enable open-vocabulary translation: after generating sub-word units, we compute the middle position of each word in terms of 
number of tokens.", "For instance, if a word in position 4 is split into three tokens, now in positions 6 , 7 and 8 , its middle position is 7 .", "We then map each sub-word of a given word to the middle position of its parent.", "For the root word, we define its parent to be itself, resulting in a parse that is a directed graph.", "The input to our encoder is a sequence of T tokens and the absolute positions of their parents.", "Figure 1 shows our parent-scaled self-attention sublayer.", "Here, for a sequence of length T , the input to each head is a matrix X RT d model of token embeddings and a vector p RT whose t -th entry p t is the middle position of the t -th token's dependency parent.", "Following Vaswani et al. (2017), in each attention head h , we compute three vectors (called query, key and value) for each token, resulting in the three matrices K h RT d , Q h RT d , and V h RT d for the whole sequence, where d = d model /H .", "We then compute dot products between each query and all the keys, giving scores of how much focus to place on other parts of the input when encoding a token at a given position.", "The scores are divided by d to alleviate the vanishing gradient problem arising if dot products are large: S h = Q h K h (cid:62) / d.", "(1) Our main contribution is in weighing the scores of the token at position t , s t , by the distance of each token from the position of t 's dependency parent: n htj = s htj d ptj , for j = 1 , ..., T, (2) where n ht is the t -th row of the matrix N h RT T representing scores normalized by the proximity to t 's parent.", "d ptj = dist ( p t , j ) is the ( t, j ) th entry of the matrix D p RT T containing, for each row d t , the distances of every token j from the middle position of token t 's dependency parent p t .", "In this paper, we compute this distance as the value of the probability density of a normal distribution centered at p t and with variance 2 , N (cid:0) p t , 2 (cid:1) : dist ( p t , j ) = f N (cid:0) j (cid:12)(cid:12) p t , 2 (cid:1) = 1 2 2 e ( j pt )2 2 2 .", "Finally, we apply a softmax function to yield a distribution of weights for each token over all the tokens in the sentence, and multiply the resulting matrix with the value matrix V h , obtaining the final representations M h for PASCAL head h .", "One of the major strengths of our proposal is being parameter-free: no additional parameter is required to train our PASCAL sub-layer as D p is obtained by computing a distance function that only depends on the vector of tokens' parent positions and can be evaluated using fast matrix operations.", "Parent ignoring Due to the lack of parallel corpora with gold-standard parses, we rely on noisy annotations from an external parser.", "However, the performance of syntactic parsers drops abruptly when evaluated on out-of-domain data (Dredze et al., 2007).", "To prevent our model from overfitting to noisy dependencies, we introduce a regularization technique for the PASCAL sub-layer: parent ignoring .", "In a similar vein as dropout (Srivastava et al., 2014), we disregard information during the training phase.", "Here, we ignore the position of the parent of a given token by randomly setting each row of D p to 1 RT with some probability q .", "Gaussian weighing function The choice of weighing each score by a Gaussian probability density is motivated by two of its properties.", "First, its bell-shaped curve: It allows us to focus most of the probability density at the mean of the distribution, which we set to the middle position 
of the sub-word units of the dependency parent of each token.", "In our experiments, we find that most words in the vocabularies are not split into sub-words, hence allowing PASCAL to mostly focus on the actual parent.", "In addition, non-negligible weights are placed on the neighbors of the parent token, allowing the attention mechanism to also attend to them.", "This could be useful, for instance, to learn idiomatic expressions such as prepositional verbs in English.", "The second property of Gaussian-like distributions that we exploit is their support: While most of the weight is placed in a small window of tokens around the mean of the distribution, all the values in the sequence are actually multiplied by non-zero factors; allowing a token j farther away from the parent of token t , p t , to still play a role in the representation of t if its score s h tj is high.", "PASCAL can be seen as an extension of the local attention mechanism of Luong et al. (2015), with the alignment now guided by syntactic information.", "Yang et al. (2018) proposed a method to learn a Gaussian bias that is added to, instead of multiplied by, the original attention distribution.", "As we will see next, our model significantly outperforms this.", "Data We evaluate the efficacy of our approach on standard, large-scale benchmarks and on low-resource scenarios, where the Transformer was shown to induce poorer syntax.", "Following Bastings et al. (2017), we use News Commentary v11 (NC11) with En-De and De-En tasks to simulate low resources and test multiple source languages.", "To compare with previous work, we train our models on WMT16 En-De and WAT En-Ja tasks, removing sentences in incorrect languages from WMT16 data sets.", "For a thorough comparison with concurrent work, we also evaluate on the large-scale WMT17 En-De and low-resource WMT18 En-Tr tasks.", "We rely on Stanford CoreNLP (Man-ning et al., 2014) to parse source sentences.", "1 Training We implement our models in PyTorch on top of the Fairseq toolkit.", "2 Hyperparameters, including the number of PASCAL heads, that achieved the highest validation BLEU (Papineni et al., 2002) score were selected via a small grid search.", "We report previous results in syntax-aware NMT for completeness, and train a Transformer model as a strong, standard baseline.", "We also investigate the following syntax-aware Transformer approaches: 1 +P ASCAL : The model presented in 2.", "The variance of the normal distribution was set to 1 (i.e., an effective window size of 3 ) as 99 .", "99% of the source words in our training sets are at most split into 7 sub-words units.", "+LISA: We adapt LISA (Strubell et al., 2018) to NMT and sub-word units by defining the parent of a given token as its first sub-word (which represents the root of the parent word).", "+M ULTI-TASK : Our implementation of the multi-task approach by Currey and Heafield (2019) where a standard Transformer learns to both parse and translate source sentences.", "+S&H: Following Sennrich and Haddow (2016), we introduce syntactic information in the form of dependency labels in the embedding matrix of the Transformer encoder.", "1 For a detailed description, see Appendix A. 
2 https://github.com/e-bug/pascal .", "Table 1 presents the main results of our experiments.", "Clearly, the base Transformer outperforms previous syntax-aware RNN-based approaches, proving it to be a strong baseline in our experiments.", "The table shows that the simple approach of Sennrich and Haddow (2016) does not lead to notable advantages when applied to the embeddings of the Transformer model.", "We also see that the multi-task approach benefits from better parameterization, but it only attains comparable performance with the baseline on most tasks.", "On the other hand, LISA, which embeds syntax in a self-attention head, leads to modest but consistent gains across all tasks, proving that it is also useful for NMT.", "Finally, PASCAL outperforms all other methods, with consistent gains over the Transformer baseline independently of the source language and corpus size: It gains up to +0 .", "9 BLEU points on most tasks and a substantial +1 .", "75 in RIBES (Isozaki et al., 2010), a metric with stronger correlation with human judgments than BLEU in En Ja translations.", "On WMT17, our slim model compares favorably to other methods, achieving the highest BLEU score across all source-side syntax-aware approaches.", "3 Overall, our model achieves substantial gains given the grammatically rigorous structure of English and German.", "Not only do we expect performance gains to further increase on less rigorous sources and with better parses (Zhang et al., 2019), but also higher robustness to noisier syntax trees obtained from back-translated with parent ignoring.", "Performance by sentence length As shown in Figure 2, our model is particularly useful when translating long sentences, obtaining more than +2 BLEU points when translating long sentences in all low-resource experiments, and +3 .", "5 BLEU points on the distant En-Ja pair.", "However, only a few sentences ( 1% ) in the evaluation datasets are long.", "3 Note that modest improvements in this task should not be surprising as Transformers learn better syntactic relationships from larger data sets (Raganato and Tiedemann, 2018).", "Qualitative performance Table 2 presents examples where our model correctly translated the source sentence while the Transformer baseline made a syntactic error.", "For instance, in the first example, the Transformer misinterprets the adverb only as an adjective of tendency: the word only is an adverb modifying the verb agreed.", "In the second example, don't is incorrectly translated to the past tense instead of present.", "PASCAL layer When we introduced our model, we motivated our design choice of placing PASCAL heads in the first layer in order to enrich the representations of words from their isolated embeddings by introducing contextualization from their parents.", "We ran an ablation study on the NC11 data in order to verify our hypothesis.", "As shown in Table 3a, the performance of our model on the validation sets is lower when placing Pascal heads in upper layers; a trend that we also observed with the LISA mechanism.", "These results corroborate the findings of Raganato and Tiedemann (2018) who noticed that, in the first layer, more attention heads solely focus on the word to be translated itself rather than its context.", "We can then deduce that enforcing syntactic dependencies in the first layer effectively leads to better word representations, which further enhance the translation accuracy of the Transformer model.", "Investigating the performance of multiple syntax-aware layers is left as 
future work.", "Gaussian variance Another design choice we made was the variance of the Gaussian weighing function.", "We set it to 1 in our experiments motivated by the statistics of our datasets, where the vast majority of words is at most split into a few tokens after applying BPE.", "Table 3b corroborates our choice, showing higher BLEU scores on the NC11 validation sets when the variance equals 1 .", "Here, parent-only is the case where weights are only placed to the middle token (i.e. the parent).", "Sensitivity to hyperparameters Due to the large computational cost required to train Transformer models, we only searched hyperparameters in a small grid.", "In order to estimate the sensitivity of the proposed approach to hyperparameters, we trained the NC11 De-En model with the hyperparameters of the En-De one.", "In fact, despite being trained on the same data set, we find that more PASCAL heads help when German (which has a higher syntactic complexity than English) is used as the source language.", "In this test, we only find 0 .", "2 BLEU points with respect to the score listed in Table 1, showing that our general approach is effective regardless of extensive fine-tuning.", "This study provides a thorough investigation of approaches to induce syntactic knowledge into self-attention networks.", "Through extensive evaluations on various translation tasks, we find that approaches effective for RNNs do not necessarily transfer to Transformers (e.g. +S&H).", "Conversely, dependency-aware self-attention mechanisms (LISA and PASCAL ) best embed syntax, for all corpus sizes, with PASCAL consistently outperforming other all approaches.", "Our results show that exploiting core components of the Transformer to embed linguistic knowledge leads to higher and consistent gains than previous approaches.", "We are grateful to the anonymous reviewers, Desmond Elliott and the CoAStaL NLP group for their constructive feedback.", "The research results have been achieved by Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation, the Commissioned Research of National Institute of Information and Communications Technology (NICT), Japan." ]
[ "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "other", "objective", "result", "other", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "other", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "method", "result", "abstain", "result", "other", "other" ]
[ "In this paper, we present an adaptive convolution for text classification to give stronger flexibility to convolutional neural networks (CNNs).", "Unlike traditional convolutions that use the same set of filters regardless of different inputs, the adaptive convolution employs adaptively generated convolutional filters that are conditioned on inputs.", "We achieve this by attaching filter-generating networks, which are carefully designed to generate input-specific filters, to convolution blocks in existing CNNs.", "We show the efficacy of our approach in existing CNNs based on our performance evaluation.", "Our evaluation indicates that adaptive convolutions improve all the baselines, without any exception, as much as up to 2.6 percentage point in seven benchmark text classification datasets.", "Text classification assigns topics to texts by understanding the semantics of the texts.", "It is one of the fundamental tasks in natural language processing (NLP) which has a broad range of applications, including web search (Broder et al., 2007), contextual advertising (Lee et al., 2013), and user profiling (Kazai et al., 2016).", "Traditional approaches to text classification use sparse representations of text, such as bag-of-words (Lodhi et al., 2002).", "To date, neural network-based text embedding techniques, particularly convolutional neural networks (CNNs) (Kim, 2014; Zhang et al., 2015; Wang et al., 2018b) have shown remarkable results in text classification.", "One of the driving forces of CNNs is a convolution operation.", "It screens local information which appear in inputs (either input texts or outputs from the previous convolution block) by convolving a set of filters with inputs.", "In the context of text classification, this operation is analogous to questions and answers.", "Convolutional filters are like questions asking for the intensity of particular patterns in receptive fields.", "Outputs of convolution operations are the answers from the inputs to the questions.", "CNNs derive the right class with stacked convolution blocks 1 .", "On this point, CNNs can be likened to players of the twenty questions who guess the answers by iteratively asking questions and receiving information.", "However, differences exist between humans and traditional CNNs in the manner in which they play this game.", "Humans adaptively ask questions by fully utilizing information they have obtained.", "If players have narrowed the answer down to the name of a person, they would not want to ask questions such as, Does that have four legs?.", "Rather, they would prefer questions related to the target's profession or origin that are practical for inferring the answer.", "In contrast, typical CNNs use the same set of filters in any circumstances (Kim, 2014; Johnson and Zhang, 2017; Wang et al., 2018b).", "This may hamper CNNs from leveraging the information they have as intermediate hidden representations of inputs processed in consecutive convolution operations, and focusing their capacity on disentangling uncertainty.", "Motivated by this, we propose an adaptive convolution to give stronger flexibility to networks and allow networks to simulate human capabilities of utilizing the information they have.", "The adaptive convolution performs convolution operations with filters (questions) dynamically generated conditioned on inputs (outputs from the previous convolution block).", "We achieve this by attaching filter-generating networks, carefully 1 Pooling can be interleaved with convolution.", "designed 
modular networks for generating filters, to convolutional blocks in CNNs.", "Each attached filter-generating network produces filters from the input and pass the filters to its convolution block.", "Generated filters are reflections of the information contained in the inputs and allow the networks to focus on extracting informative features.", "We further propose a hashing technique to substantially compress the size of the filter-generating networks, and prevent a considerable increase in the number of parameters when applying the adaptive convolution.", "Our adaptive convolution can easily be applied to existing CNNs, because of the modularity of the filter-generating networks.", "We demonstrate that significant gains can be realized by applying adaptive convolutions to baseline CNNs (Kim, 2014; Johnson and Zhang, 2017; Wang et al., 2018b), based on a performance evaluation.", "Our adaptive convolutions improve performance of all the baseline CNNs as much as up to 2.6 percentage point, without any exception, in seven text classification benchmark datasets.", "To summarize, our technical contributions are three fold: We propose an adaptive convolution which can give stronger flexibility to existing CNNs.", "We design a hashing technique to apply the adaptive convolution without a considerable increase in the number of required parameters.", "We show the effectiveness of our approach based on an evaluation on seven text classification benchmark datasets.", "The remainder of this paper is organized as follows.", "Section 2 discusses related works and Section 3 describes the proposed methodology.", "We present our evaluation in Section 4 and conclude the paper in Section 5.", "Ever since single layer CNNs were successfully applied to text classification with pre-trained word embeddings (Kim, 2014), many researches have sought effective utilization of CNNs in text classification.", "Kalchbrenner et al. (2014) introduced a dynamic k -max pooling to handle variable length sequences.", "Zhang et al. (2015) classified texts wholly based on the characters in the texts.", "Lai et al. (2015) and Xiao et al. (2016) incorporated recurrent neural networks (RNNs) into CNNs.", "Conneau et al. (2017) and Johnson et al. (2017) investigated deepening CNNs.", "Wang et al. (2018b) used dense connections to reuse features from upstream layers at downstream layers.", "Interests of these researches were concentrated to network architectures, pooling operations or input structures, accepting the nature of the convolution operation.", "Our work is different from them in that we focus on the convolution operation.", "Generating parameters in neural networks have been examined in various researches.", "Noh et al. (2016) used embedded questions to adaptively create parameters for a fully connected layer in visual question answering.", "Bertinetto et al. (2016) predicted parameters of a predictor network from an exemplar in a one-shot learning framework.", "Van den Oord et al. (2016) generated feature-wise biases from descriptive labels or tags that were directly added to layer outputs for conditional image generation.", "Liu et al. 
(2017) introduced a specifically designed meta network to produce weights for compositional matrices in tree structured neural networks.", "Several researches have adopted parameter generation for conditional normalization (CN).", "In these studies, parameters in the normalization layer were substituted with learned functions of conditioning information, the outputs from which were then used as normalizing parameters.", "Different types of CN include conditional instance normalization (Dumoulin et al., 2017) for style transfer, dynamic layer normalization (Kim et al., 2017) for speech recognition and conditional batch normalization (De Vries et al., 2017) for visual question answering.", "Perez et al. (Perez et al., 2018) relaxed CN by modulating inputs with affine transformation without normalization.", "Studies that are most similar to our own involve generating convolutional filters in the field of computer vision.", "Klein et al. (2015) proposed a dynamic convolution layer for weather prediction.", "They generated filters for subsequent frames with the previous image.", "De Brabandere et al. (2016) expanded the dynamic convolution layer by allowing position-specific filters.", "Niklaus et al (2017) estimated convolutional filters with images in receptive patches to interpolate their corresponding output pixels.", "Kang et al. (2017) produced convolutional filters with side information such as camera perspective for crowd counting.", "These approaches can be generalized by the hypernetworks (Ha et al., 2017) in which all weights for the main networks are generated with original inputs.", "Although this idea can be directly applied for text processing (Shen et al., 2018a), our approach is different from theirs in that we generate filters with outputs from previous convolution blocks, to fully utilize intermediate information obtained from networks as they process inputs.", "This section introduces our adaptive convolution.", "Instead of convolving the same set of filters with inputs, an adaptive convolution uses dynamically generated filters conditioned on the outputs from the previous convolution block.", "In Section 3.1, we explain how the filters are generated in the filter-generating network.", "In Section 3.2, we show how the adaptive convolution operates with the generated filters, and how it can be applied to existing CNNs.", "Figure 1 schematically shows the overall architecture of our filter-generating network.", "The filter-generating network takes an input to a convolution block I = [ x 1 , x 2 , , x m ] .", "m is the length which can be the number of words in each text or reduced number of it as a result of a pooling operation.", "Entry x i R d is a d -dimensional vector in the i th position in the input.", "d is equal to the number of filters of the previous convolution block, except for the first convolution block which uses word embedding dimension.", "It outputs a set of k convolutional filters F = [ f 1 , f 2 , , f k ] .", "Each filter f i R h d consists of a filter size h by d weights.", "The filter-generating network generates filters in two phases: context vector generation and filter generation.", "During context vector generation, the variable size input I is encapsulated into a fixed size g -dimensional context vector c R g .", "During filter generation, the filter-generating network adaptively produces filters from the context vector.", "information inherent in inputs.", "An input to each convolution block is an intermediate hidden representation of a text.", 
"Clearly, text has a sequential nature.", "This property is preserved when text is processed in typical CNNs because they do not shuffle position information between convolution blocks.", "Operations in CNNs produce an entry for each position from the corresponding position of the filters.", "As words within a text are dependent on each other, entries within an input are related.", "To gain some dependencies between entries in a sequence, we use the Gated Recurrent Unit (GRU) (Cho et al., 2014).", "We found from our preliminary experiments that GRU shows comparable performance to the Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) with fewer parameters and lower computational cost.", "We obtain a hidden state h t R g for entry x t by concatenating two hidden states of bidirectional GRU at time step t : h t = GRU ( x t , h t 1 ) , h t = GRU ( x t , h t +1 ) , h t = [ h t ; h t ] (1) We then have m number of hidden states H = [ h 1 , h 2 , , h m ] .", "We encapsulate H into a fixed size context vector c by the weighted sum of hidden states : c = m (cid:88) j =1 a j h j (2) where a j is a scalar weight of each hidden state h j calculated as follows: a j = exp ( q (cid:62) h j ) (cid:80) mk =1 exp ( q (cid:62) h k ) (3) where q is a trainable query vector.", "This is a special case of self-attention (Lin et al., 2017) with a hop size of one and without a hidden state projection.", "The context vector c is an effective summarization of the input, and convolutional filters can be readily generated from the fixed size context vector.", "Filter Generation : Once we obtain the context vector c by attending hidden states of the bidirectional GRU, we generate convolutional filters F by a function of c .", "Although any kinds of function can be applied, we are interested in adding filter-generating networks to existing CNNs.", "In order for existing CNNs to be trained in an end-to-end fashion even after filter-generating networks are added to them, a differentiable architecture is preferred so that gradients can be backpropagated.", "We use a fully-connected layer for its simplicity.", "With this layer, convolutional filters can be generated in two different approaches: full generation and hashed generation .", "For clarity, we explain how each filter f i is generated.", "All filters in F are produced in the same manner.", "Full Generation : The most straightforward way to generate a convolutional filter is to use an output of a fully-connected layer directly as a convolutional filter.", "The layer takes the context vector c and yields filter f i as follows: f i = W i c (5) where W i R ( h d ) g is the weight matrix for generating i th filters.", "Hashed Generation : Full generation requires numerous parameters because the size of the matrix W i increases quadratically between the size of the context vector and the convolutional filter.", "This may cause a memory issue in very deep CNNs.", "To address this issue, we employ a hashing trick.", "The hashing trick allows the filter to be generated with only a fraction of the required number of parameters for full generation.", "Our hashing trick is motivated by the recently proposed hash embeddings (Svenstrup et al., 2017), which constructs word embeddings with the weighted sum of n component vectors from a shared pool.", "Based on this idea, we generate each f i by a linear combination of component filters from a shared pool.", "The shared pool E R b ( h d ) contains b component filters.", "We select n component filters for f i from the 
shared pool using predefined hash functions.", "The filter f i is generated by the linear combination as follows: f i = n (cid:88) j =1 p i,j H j ( f i ) (6) where p i,j is the importance parameter which determines the weight for the linear combination, and H j is a function that outputs a component filter from an ID of the filter, which is denoted as f i .", "H j is implemented by H j = ED j ( f i ) , where D j is a hash function.", "More specifically, D j takes f i , and outputs a bucket index in { 1 , , b } .", "The component filter is extracted by taking a row of the bucket index in E .", "To obtain input-specific filters, we control p ki as follows: p i,j = w i,j (cid:62) c (7) where w i,j R g is a vector for generating p i,j from the context vector.", "Because we require n numbers of p i,j , n g parameters are needed to generate each filter.", "The number of the importance parameters n can be chosen to be quite small (we typically use five), and it can provide a huge reduction in the number of parameters compared to the full generation, which uses h d g parameters.", "The additional parameters for the hashed generation come from the shared pool E .", "Yet, their portion is relatively small, because E is shared across the filters and the size of the shared pool b can be moderate (we typically use 20).", "We achieve the adaptive convolution by adding a filter-generating network to each convolution block.", "The filter-generating network yields filters from its input (output from the previous convolution block).", "The adaptive convolution involves the input-specifically generated filters, which are applied to inputs to produce new outputs.", "More formally, a feature o i,j corresponding to the filter f i and j th position Dataset AG DBPedia Yelp.p Yelp.f Yahoo Amazon.p Amazon.f # of training data 120k 560k 560k 650k 1400k 3600k 3000k # of test data 7.6k 70k 38k 50k 60k 400k 650k # of classes 4 14 2 5 10 2 5 # of average words 44 54 155 157 108 90 92 # of vocabulary 27k 129k 63k 68k 161k 223k 202k Table 1: Statistics of the datasets Algorithm 1 Generalized forward propagation of CNNs applied with the adative convolution.", "of window in inputs is computed as follows:", "where x j : j + h 1 denotes the concatenation of inputs [ x j , x j +1 , , x j + h 1 ] , and is an activation function such as ReLU (Nair and Hinton, 2010).", "A position feature o j is produced by concatenating features from all filters f 1 , f 2 , , f k , which are applied to the j th position of a window.", "The output of an adaptive convolution is a sequence of position features O = [ o 1 , o 2 , , o m h +1 ] .", "This output becomes the input to the next convolution block, after operations predefined in the network structure (such as a pooling) are applied.", "This procedure is repeated for all convolution blocks in the network.", "Algorithm 1 details our approach.", "Models adopted with our adaptive convolution can be trained in a typical backpropagation, as all components in adaptive convolution are differentiable.", "classification tasks compiled by Zhang et al. 
(2015).", "Statistics are summarized in Table 1.", "AG',DBPedia' and Yahoo' are news, ontology, and topic classification datasets, respectively.", "The others are sentiment classification datasets, where .p'(polarity) in the dataset name indicates that labels are binary and .f' (full) means that the labels refer to the number of stars.", "We tokenize each text using Stanford's CoreNLP (Manning et al., 2014) after converting all uppercase letters to lowercase letters.", "In building a vocabulary, we retain words that appear more than five times in a corpus.", "We replace remaining words with the special UNK' tokens.", "Baselines We select three baseline CNNs to which we apply our adaptive convolution.", "First one is CNN (Kim, 2014), the basic form of CNNs consists of a single convolution block.", "The others are recently proposed DPCNN (Johnson and Zhang, 2017) and DenseCNN (Wang et al., 2018b) which employ multiple convolution blocks.", "We reproduce these three models and apply adaptive convolutions to assess the efficacy of our methodology.", "All of them are word-level CNNs.", "We do not apply adaptive convolutions to character-level CNNs (Zhang et al., 2015; Conneau et al., 2017) because of their relatively poor performance compared to word-level CNNs (Johnson and Zhang, 2016).", "We also compare the performance of our methodology with ACNN (Shen et al., 2018a).", "Similar to our approach, ACNN employs dynamically generated filters for convolutions.", "Different from our approach, however, it generates filters with original inputs from a single subnetwork.", "Note that ACNN is a specifically designed network architecture, so its filter generation approach can not readily be applied to other existing CNNs.", "Models other than CNNs, such as RNNs (Yang et al., 2016; Lin et al., 2017; Wang et al., 2017) and word embedding-based models (Joulin et al., 2017; Shen et al., 2018b; Wang et al., 2018a) are also included as baseline models.", "We do not include Models AG DBPedia Yelp.p Yelp.f Yahoo Amazon.p Amazon.f CharCNN (Zhang et al., 2015) * 91.45 98.45 95.12 62.05 71.20 95.07 59.57 VDCNN (Conneau et al., 2017) * 91.3 98.7 95.7 64.7 73.4 95.7 63.0 FastText (Joulin et al., 2017) * 92.5 98.6 95.7 63.9 72.3 94.6 60.2 WSEM (Shen et al., 2018b) * 92.66 98.57 95.81 63.79 73.53 LEAM (Wang et al., 2018a) * 92.45 99.02 95.31 64.09 77.42 HAN (Yang et al., 2016) 93.28 98.99 97.08 67.92 75.88 95.94 63.54 (75.8) (63.6) KnnLSTM (Wang et al., 2017) * 94.2 99.1 94.5 61.9 74.4 95.3 60.3 Self-Attention (Lin et al., 2017) 91.5 98.3 94.9 63.4 59.8 ACNN (Shen et al., 2018a) 93.82 99.01 96.51 65.98 74.95 95.54 62.03 (98.93) (96.21) CNN (Kim, 2014) 93.15 98.92 96.33 65.52 74.2 95.24 61.09 (93.05) (98.88) (96.54) (65.79) (73.94) (95.73) (62.49) (Ours) AC CNNfull generation 94.07 99.17 97.18 68.12 76.15 96.23 63.60 (Ours) AC CNNhashed generation 93.81 99.13 97.16 68.07 76.01 96.25 63.74 DPCNN (Johnson and Zhang, 2017) 92.87 98.98 96.77 67.01 75.33 96.07 63.30 (Ours) AC DPCNNfull generation 94.03 99.13 97.05 67.91 76.36 96.31 63.94 (Ours) AC DPCNNhashed generation 93.70 99.10 97.12 67.98 76.07 96.20 63.72 DenseCNN (Wang et al., 2018b) 93.30 99.00 96.48 66.02 74.91 95.95 62.71 (93.6) (99.2) (96.5) (66.0) (63.0) (Ours) AC DenseCNN full generation 94.35 99.12 97.01 67.63 76.43 96.30 63.91 (Ours) AC DenseCNN hashed generation 93.63 99.09 96.96 67.74 75.95 96.14 63.59 Table 2: Test accuracies [%] on the seven text classification datasets.", "the models where transfer learning is applied, such as ULMFiT (Howard and Ruder, 
2018) to compare the capacity of models by themselves, not the effectiveness of transfer.", "Training Details We implement all of the models with PyTorch (Paszke et al., 2017) framework.", "For all the models and datasets, we use 300 dimensional GloVe (Pennington et al., 2014) vectors trained on 840 billion words for word embedding initialization and initialize out-of-vocabulary words with Gaussian distribution with the standard deviation of 0.6.", "We do not use the text region embedding (Johnson and Zhang, 2017), for fair comparisons with other comparative models.", "We optimize parameters using Adam (Kingma and Ba, 2015) with initial learning rate of 0.01 and batch size of 128.", "Gradients are clipped at 5.0 by norm.", "We use ReLU activation after convolution operations.", "Model-specific configurations are as follows: CNN : We use the total of 300 filters, with 100 filters each having window size of 3,4 and 5.", "DPCNN : We use 100 filters with a size of 3, for each convolution operation.", "We set the depth to 11 for all the datasets except for the AG' dataset in which depth is set to 9.", "DenseCNN : We use 75 filters with a size of 3 for each convolution block.", "Input texts are padded or truncated to a fixed length.", "We set the fixed length to 300, except for the AG' in which 100 is used.", "For AG' dataset, we use six convolution blocks and seven for all the other datasets.", "These configurations are set on the validation set held out by 10% from the training data.", "If not specified, the same configurations are used in all the datasets.", "Once we fit model settings, we apply our adaptive convolution to those settings.", "We use 600 for the context vector size (i.e. 300 for GRU hidden states).", "In the hashed generation, we use 20 for the hash (shared) pool size and five for the number of importance parameters.", "Table 2 shows the evaluation results on the datasets.", "In the table, CharCNN and VDCNN are character level CNNs.", "FastText , SWEM and LEAM are word embedding-based models.", "HAN , KnnLSTM and Self-Attention are RNN variants.", "AC ' in the model names indicate that adaptive convolution is applied in the model.", "Both full generation' and hashed generation' are filter-generating methods.", "As shown in the table, adaptive convolutions improve all baseline CNNs in all datasets, with no exception.", "The performance improvements are relatively small for datasets in which known performances are already nearly 100%.", "In DBPedia dataset, performance improves by as much as 0.15 percentage point (%p) over the baselines.", "However, for datasets with a potential for considerable performance improvement, such as Yelp.f and Yahoo datasets, adaptive convolutions produce significant results.", "The performance improvements are up to 2.6%p for Yelp.f dataset.", "Without adaptive convolutions, RNNs show better performance than CNNs on most datasets.", "However, when adaptive convolutions are applied, our baselines perform better than RNN variants on every dataset.", "Also our approaches perform", "better than word embedding-based models except for LEAM model on Yahoo dataset.", "Furthermore, our adaptive convolutions beat ACNN on all the datasets.", "This suggests that generating filters block by block with outputs from the previous convolution block is much more efficient than generating filters with original inputs in a single subnetwork.", "Adaptive convolutions are found to be effective both in the full and hashed generation.", "The performance differences between 
the hashed generation and the full generation are within 0.3%p except for a few cases (AG and Yahoo datasets in DenseCNN ).", "However, the hashed generation is much more efficient than the full generation in terms of the parameter size.", "Overall, the total number of parameters for models with the hashed generation is less than 3% that of the full generation (Table 3).", "Analysis on model settings To further demonstrate the effectiveness of our adaptive convolutions, we compare the performance with varying filter sizes and depths.", "We select CNN and AC CNN with filter sizes ranging from 2 to 150 to check the performance with different filter sizes.", "To analyze the effect of depths, we choose DPCNN and DenseCNN , and their counterparts to which adaptive convolutions are applied.", "We evaluate the performance with depths ranging from 2 to 13.", "The results are illustrated in Figure", "2. Note that validation accuracies are lower than test accuracies because only 90% of the training set is used to save remaining 10% for the validation set.", "As can be seen in the figure, models adopted with our adaptive convolutions show stable performance.", "In case of filter sizes, CNN drastically drops performance when the filter size gets smaller.", "Its performance with the filter size of two is 8.4%p lower than that of the filter size of 100.", "However, the performance of AC CNN with the filter size of two declines by 0.6%p from the model with the filter size of 100, which is only 10 percent of the performance loss of CNN .", "This tendency is also found in the analysis on depths.", "Performance of DPCNN and DenseCNN with depths of two or three are 0.6%p lower than that of the baselines with the best performing depths.", "Contrarily, models with adaptive convolutions perform only 0.12%p lower with depths of two or three than models with the best performing depths.", "These results demonstrate that adaptive convolutions effectively generate filters for capturing important information that need to be disambiguated given current inputs.", "Only few filters and shallow depths are enough for the adaptive convolution to extract such information.", "This suggests that the required effort to tune hyperparameters can be mitigated by applying adaptive convolutions.", "Additional noteworthy fact is that increasing filter sizes and depths beyond a certain level does not lead to performance improvements in all the baseline CNNs.", "In case of CNN , no change in performance is observed when the filter size exceeds 100.", "In case of DenseCNN , increasing depths more than seven rather results in a performance drop and increasing depths of DPCNN exceeding eleven has no effect on the performance.", "This implies that performance gain is caused by the effectiveness of the proposed adaptive convolution, instead of the increased number of parameters (Table. 3).", "Analysis on hashed generation We investigate the effect of the hashed generation settings on the performance with different importance parameters AC CNN AC DPCNN AC DenseCNN with GRU 75.86 75.87 75.72 w/o GRU 74.76 75.37 75.36 Table 4: Validation accuracies on Yahoo dataset for the hashed generation-based models with different context generation settings.", "and hash (shared) pool sizes.", "The number of importance parameters is ranging from 2 to 9 and the hash (shared) pool size is in the range from 10 to 100.", "The results are shown in Figure", "3. 
As illustrated, increasing the sizes of hash pool and importance parameters beyond certain threshold does not guarantee performance gain.", "The number of importance parameter is optimal at five.", "Higher value of it has no effect on performance and in some cases, negatively affect performance.", "In case of the hash pool size, 20 is enough for containing candidate filters.", "This observation supports the previous finding (Denil et al., 2013) that many redundant parameters exist in deep neural networks.", "Our results reveal that the networks can be parameterized with a set of candidate weights, and their size can be sufficiently small to significantly reduce the number of required parameters in the network with little performance loss.", "Effect of GRU We perform an ablation test to validate the usage of GRUs in generating the context vector.", "Table 4 shows the results of the ablation test.", "In all models, utilizing GRUs in generating the context vector significantly improves performance as much as up to 1.1%p.", "This clearly indicates the existence of dependencies between entries in each layer.", "These can be effectively captured and incorporated into the context vector with GRUs and attention-based context vector generation scheme.", "Filter visualization To better understand generated filters by adaptive convolutions with different inputs, we visualize filters with t -SNE (Maaten and Hinton, 2008).", "We compare filters trained with the baseline CNN as well as filters generated by AC CNN from different input texts.", "The corresponding results are shown in Figure", "4. As clearly seen, filters from CNN are dispersed in the projected space.", "By contrast, filters generated by AC CNN with a positively and a negatively labeled sample are concentrated on the upper right Figure 4: Filters visualized with t -SNE.", "and the lower left part of the space, respectively.", "This demonstrates that the generated filters in adaptive convolutions are focused to disambiguate uncertainty in given information.", "Filters trained in CNN are not specified to given inputs, but are generally tuned to solve given tasks.", "In this paper, we have introduced the adaptive convolution to endow flexibility to convolution operations.", "Further, we have proposed the hashing technique which can drastically reduce the number of parameters for adaptive convolutions.", "We have validated our approach based on the performance evaluation with seven datasets, and investigated the effectiveness of adaptive convolutions through analysis.", "We believe that our methodology is applicable to other NLP tasks with text pairs, such as textual entailment, question answering.", "We plan to apply the proposed approach to those tasks in the future.", "This research was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (MIST) (No.2018R1A2A1A05078380).", "This research was also in part supported by the Information Technology Research Center (ITRC) support program supervised by the Institute for Information & communications Technology Promotion (IITP) (IITP-2019-2016-0-00464)." ]
[ "method", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "objective", "method", "result", "method", "abstain", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "other", "other" ]
[ "The performance of fine-tuning pre-trained language models largely depends on the hyperparameter configuration.", "In this paper, we investigate the performance of modern hyperparameter optimization methods (HPO) on fine-tuning pre-trained language models.", "First, we study and report three HPO algorithms' performances on fine-tuning two state-of-the-art language models on the GLUE dataset.", "We find that using the same time budget, HPO often fails to outperform grid search due to two reasons: insufficient time budget and overfitting.", "We propose two general strategies and an experimental procedure to systematically troubleshoot HPO's failure cases.", "By applying the procedure, we observe that HPO can succeed with more appropriate settings in the search space and time budget; however, in certain cases overfitting remains.", "Finally, we make suggestions for future work.", "Our implementation can be found in https://github.c om/microsoft/FLAML/tree/main/flaml /nlp/ .", "In the recent years, deep learning and pre-trained language models (Devlin et al., 2019; Liu et al., 2019; Clark et al., 2020; He et al., 2021) have achieved great success in the NLP community.", "It has now become a common practice for researchers and practitioners to fine-tune pre-trained language models in down-stream NLP tasks.", "For example, the HuggingFace transformers library (Wolf et al., 2020) was ranked No.1 among the most starred NLP libraries on GitHub using Python 1 .", "Same as other deep learning models, the performance of fine-tuning pre-trained language models largely depends on the hyperparameter configuration.", "A different setting in the hyperparam-1 https://github.com/EvanLi/Github-Ranking/blob/master/Top100/Python.md eters may cause a significant drop in the performance, turning a state-of-the-art model into a poor model.", "Methods for tuning hyperparameters can be categorized as (1) traditional approaches such as manual tuning and grid search, and (2) automated HPO methods such as random search and Bayesian optimization (BO).", "Manual tuning often requires a large amount of manual efforts; whereas grid search often suffers from lower efficiency due to the exponential increase in time cost with the number of hyperparameters.", "Automated HPO methods were proposed to overcome these disadvantages.", "Recently, automated HPO methods also become increasingly popular in the NLP community (Zhang and Duh, 2020; Dodge et al., 2019).", "For example, Bayesian optimization (BO) (Zhang and Duh, 2020) and Population-based Training (Jaderberg et al., 2017) both prove to be helpful for improving the performance of the transformer model (Vaswani et al., 2017) for neural machine translation.", "The HuggingFace library has also added native supports for HPO in a recent update (version 3.1.0, Aug 2020).", "With improved supports, users can now easily access a variety of HPO methods and apply them to their fine-tuning tasks.", "However, the effectiveness of this step is less understood.", "To bridge this gap, in this paper, we propose an experimental study for fine-tuning pre-trained language models using the HuggingFace library.", "This study is motivated by the following research questions: First, can automated HPO methods outperform traditional tuning method such as grid search?", "Second, on which NLP tasks do HPO methods work better?", "Third, if HPO does not work well, how to troubleshoot the problem and improve its performance?", "To answer these questions, we start from a simple initial study (Section 
4) by examining the performance of three HPO methods on two state-of-the-art language models on the GLUE dataset.", "The time budget for HPO in the initial study is set to be the same as that of grid search.", "Results of the initial study show that HPO often fails to match grid search's performance.", "The reasons for HPO's failures are twofold: first, the same budget as grid search may be too small for HPO; second, HPO overfits the task.", "With these observations, we propose two general strategies for troubleshooting the failure cases in HPO as well as an overall experimental procedure (Figure 1).", "By applying the procedure (Section 5), we find that by controlling overfitting with a reduced search space and using a larger time budget, HPO outperforms grid search in more cases.", "However, the overfitting problem still exists in certain tasks even when we only search for the learning rate and batch size.", "Finally, we make suggestions for future work (Section 7).", "The main contributions of this work are: We empirically study the performance of three HPO methods on two pre-trained language models and on the GLUE benchmark; We design an experimental procedure which proves useful to systematically troubleshoot the failures in HPO for fine-tuning; We report and analyze the execution results of the experimental procedure, which sheds light on future work; 2 Definition of HPO on Language Model Fine-Tuning Given a pre-trained language model, a fine-tuning task, and a dataset containing $D_{train}$, $D_{val}$, $D_{test}$, the goal of a hyperparameter optimization algorithm is to find a hyperparameter configuration $c$, so that, when trained under configuration $c$, the model's performance on a validation set $D_{val}$ is optimized.", "Formally, the goal of HPO is to find $c^* = \arg\max_{c \in S} f(c, D_{train}, D_{val})$, where $S$ is called the search space of the HPO algorithm, i.e., the domain from which the hyperparameter values can be chosen.", "The function $f(\cdot, \cdot, \cdot)$ is called the evaluation protocol of HPO, which is defined by the specific downstream task.", "For example, many tasks in GLUE define $f$ as the validation accuracy.", "If a task has multiple protocols, we fix $f$ as one of them (see footnote 2).", "After finding $c^*$, the performance of HPO is evaluated using the performance of the model trained with $c^*$ on the test set $D_{test}$.", "To fairly compare the performances of different HPO algorithms, the above optimization problem is defined with a constraint on the maximum running time of the HPO algorithm, which we call the time budget for the algorithm, denoted as $B$.", "Under budget $B$, the HPO algorithm can try a number of configurations $c_1, c_2, \ldots, c_n$.", "The process of fine-tuning with configuration $c_i$ is called a trial .", "Finally, we call the process of running an HPO algorithm $A$ once one HPO run .", "In this paper, we conduct an empirical study to answer the research questions in Section", "1. 
First, can automated HPO methods outperform grid search?", "The answer to this question depends on multiple factors, i.e., the NLP task on which HPO and grid search are evaluated, the pre-trained language model for fine tuning, the time budget, the search space for grid search and HPO algorithm, and the choice of HPO algorithm.", "To provide a comprehensive answer, we need to enumerate multiple settings for these factors.", "However, it is infeasible to enumerate all possible settings for each factor.", "For instance, there exist unlimited choices for the search space.", "To accomplish our research within reasonable computational resources 3 , for each factor, we only explore the most straight-foward settings.", "For example, the search space for grid search is set as the default grid configuration recommended for fine-tuning (Table 1), and the search space for HPO is set as a straightforward relaxation of the grid configuration.", "We explain the settings for each factor in details below.", "2 There are 3 GLUE tasks with multiple validation scores: MRPC, STS-B, and QQP (not studied).", "For MRPC we optimize the validation accuracy, and for STS-B we optimize the Pearson score on the validation set.", "3 Our experiments were run on two GPU servers, server 1 is equipped with 4xV100 GPUs (32GB), and server 2 is a DGX server equipped with 8xV100 GPUs (16GB).", "To avoid incomparable comparisons, all experiments on QNLI and MNLI are run exclusively on server 2, and all other experiments are run exclusively on server", "1. To speed up the training, we use fp16 in all our experiments.", "To guarantee the comparability between different HPO methods, all trials are allocated exactly 1 GPU and 1 CPU.", "As a result, all trials are executed in the single-GPU mode and there never exist two trials sharing the same GPU.", "NLP Tasks .", "To study HPO's performance on multiple NLP tasks, we use the 9 tasks from the GLUE (General Language Understanding Evaluation) benchmark (Wang et al., 2018).", "Time Budget .", "We focus on a low-resource scenario in this paper.", "To compare the performance of grid search vs. HPO, we first allocate the same time budget to HPO as grid search in our initial comparative study (Section 4).", "If HPO does not outperform grid search, we increase the time budget for HPO.", "We require that each HPO run takes no more than 8 GPU hours with the NVIDIA Tesla V100 GPU under our setting.", "We prune a task if the time for grid search exceeds two hours.", "A complete list of the time used for each remaining task can be found in Table", "2. 
[Table 2: The running time of grid search for each task (in seconds) and the corresponding number of epochs -- per task (Electra time/epochs, RoBERTa time/epochs): WNLI 420/3, 660/10; RTE 1000/10, 720/10; MRPC 420/3, 720/10; CoLA 420/3, 1200/10; STS-B 1200/10, 1000/10; SST 1200/3, 7800/-; QNLI 1800/3, -/-; QQP 7800/3, -/-; MNLI 6600/3, -/-.]", "We focus on two pre-trained language models: the Electra-base model (Clark et al., 2020), and the", "RoBERTa-base model (Liu et al., 2019).", "Electra and RoBERTa are among the best-performing models on the leaderboard of GLUE (www.gluebenchmark.com) as of Jan 2021.", "Another reason for choosing the two models is that they both provide a simple search space for grid search, and we find it helpful to design our HPO search space on top of them.", "We use both models' implementations from the transformers library (Wolf et al., 2020) (version = 3.4.0).", "Among all the different sizes of RoBERTa and Electra (large, base, small), we choose the base size, because large models do not fit into our 2-hour budget (our empirical observation shows that the large models take 1.5 to 2 times the running time of the base models).", "With the 2-hour time constraint, we prune tasks where grid search takes longer than two hours.", "For Electra, QQP is pruned, whereas for RoBERTa, SST, QNLI, QQP, and MNLI are pruned.", "Search Space for Grid Search and HPO .", "It is generally difficult to design an HPO search space from scratch.", "In our problem, this difficulty is further amplified by the limited computational resources.", "Fortunately, most papers on pre-trained language models recommend one or a few hyperparameter configurations for fine-tuning.", "We use them as the configurations for grid search.", "For HPO, the performance depends on the search space choice, e.g., it takes more resources to explore a large space than a smaller space close to the best configuration.", "Due to the time budget limits, we focus on a small space surrounding the recommended grid search space, as shown", "in Table", "1. 
More specifically, we convert the learning rate , warmup ratio , attention dropout , and hidden dropout to a continuous space by expanding the grid space.", "For weight decay , since the recommended configuration is 0, we follow Ray Tune's search space and set the HPO space to (0, 0.3) (Kamsetty, 2020).", "For the epoch number , most existing work uses an integer value between 3 and 10 (Clark et al., 2020; Liu et al., 2019; Dai et al., 2020), resulting in a large range of space we could possibly search.", "To reduce the exploration required for HPO, we skip expanding the search space for the epoch number and fix it to the grid configuration.", "HPO Algorithms .", "We compare the performance between grid search and three HPO algorithms: random search (Bergstra and Bengio, 2012), asynchronous successive halving (ASHA) (Li et al., 2020), and Bayesian Optimization (Akiba et al., 2019)+ASHA.", "We use all HPO methods' implementations from the Ray Tune library (Liaw et al., 2018) (version 1.2.0).", "We use BO (with the TPE sampler) together with the ASHA pruner, because with the small time budget, BO without the pruner reduces to random search.", "As fine-tuning in NLP usually outputs the checkpoint with the best validation accuracy, we also let the HPO methods output the best checkpoint of the best trial.", "This choice is explained in more detail in Appendix A.1.", "As the performance of HPO depends on the time budget, to compare grid search and HPO, we first conduct an initial study by setting the time budget of HPO to be the same as that of grid search.", "For the rest of this paper, we use aGST to denote that the time budget = a × the running time of grid search.", "Table 3 shows the experimental results on Electra and RoBERTa using 1GST.", "For each (HPO method, NLP task) pair, we repeat the randomized experiments 3 times and report the average scores.", "We analyze the results in Section 4.1.", "The grid search spaces in Table 1 are from Table 7 of Electra and Table 10 of RoBERTa.", "For Electra, we fix the hyperparameters for Adam; we skip the layer-wise learning rate decay because it is not supported by the HuggingFace library.", "While Electra's original search space for the learning rate is [3e-5, 5e-5, 1e-4, 1.5e-4], we have skipped the learning rate 5e-5 in our experiment.", "Electra .", "By comparing the performance of grid search and HPO in Table 3, we can make the following findings.", "First, HPO fails to match grid search's validation accuracy in the following tasks: RTE, STS-B, SST and QNLI.", "In certain tasks such as QNLI and RTE, grid search outperforms HPO by a large margin.", "Considering the fact that the grid search space is a subspace of the HPO space, this result shows that with the same time budget as grid search (i.e., approximately 3 to 4 trials), it is difficult to find a configuration which works better than the recommended configurations.", "Indeed, with 3 to 4 trials, it is difficult to explore the search space.", "Although ASHA and BO+ASHA both search for more trials by leveraging early stopping (Li et al., 2020), the trial numbers are still limited (the average trial numbers for experiments in Table 3 can be found in Table 6 of the appendix).", "Second, among the tasks where HPO outperforms grid search's validation accuracy, there are 2 tasks (WNLI, MRPC) where the test accuracy of HPO is lower than that of grid search.", "As a result, the HPO algorithm overfits the validation dataset.", "Overfitting in HPO generally happens when the accuracy is optimized on a limited number of validation 
data points and cannot generalize to unseen test data (Feurer and Hut-ter, 2019).", "(Zhang et al., 2021) also found that fine-tuning pre-trained language models is prone to overfitting when the number of trials is large, though they do not compare HPO and grid search.", "Finally, by searching for more trials, ASHA and BO+ASHA slightly outperform random search in the validation accuracy, but their test accuracy is often outperformed by random search.", "RoBERTa .", "By observing RoBERTa's results from Table 3, we can see that the average validation accuracy of HPO outperforms grid search in all tasks except for CoLA.", "It may look like HPO is more effective; however, most of the individual runs in Table 3 overfit.", "As a result, HPO for fine-tuning RoBERTa is also prone to overfitting compared with grid search.", "The complete lists of the overfitting cases in Table 3 can be found in Table 8 and Table 9 of Appendix A.3.", "Since Table 3 shows HPO cannot outperform grid search using 1GST, and is prone to overfitting, we propose two general strategies to improve HPO's", "performance.", "First, we increase the time budget for HPO so that HPO can exploit the space with more trials.", "Second, to control overfitting, we propose to reduce the search space.", "More specifically, we propose to fix the values of certain hyperparameters to the default values in the grid configuration (Ta-ble 3).", "The reason is that overfitting can be related to certain hyperparameter settings of the model.", "For example, it was shown in ULMFit (Howard and Ruder, 2018) that using a non-zero warmup step number can help reduce overfitting.", "Intuitively, a larger search space is more prone to overfitting.", "For example, by using a warmup search space = (0, 0.2), the warmup steps in the best trial found by HPO may be much smaller or larger than the steps used by grid search.", "Other hyperparameters which are related to overfitting of fine-tuning include the learning rate (Smith and Le, 2017), batch size (Smith et al., 2017), and the dropout rates (Sri-vastava et al., 2014; Loshchilov and Hutter, 2019, 2018).", "Our proposed procedure for troubleshooting HPO failures is depicted in Figure", "1. Starting from the full search space and 1GST, we test the HPO algorithm for a few times.", "If any overfitting is observed, we reduce the search space and go back to testing the HPO algorithm again.", "On the other hand, if no overfitting is observed and HPO also does not outperform grid search, we increase the time budget and also go back to testing the HPO algorithm again.", "We continue this procedure until any of the following conditions is met: first, HPO successfully outperforms grid search; second, the search space cannot be further reduced, thus HPO overfits the task; third, the time budget cannot be further increased under a user-specified threshold, thus whether HPO can outperform grid search is to be determined for this specific task.", "In this section, we evaluate the effectiveness of our proposed procedure in Figure", "1. 
To apply the procedure, we need to further consolidate two components: first, what time budget should we use; second, which hyperparameter to fix for reducing the search space.", "For the first component, we use a relatively small list for time budget options { 1GST, 4GST } .", "For the second component, it is difficult to guarantee to reduce overfitting by fixing a specific hyperparameter to its grid search values.", "When choosing the hyperparameter to fix, we refer to the configurations of the best trials which cause the HPO results to overfit.", "Electra .", "To decide which hyperparameter to fix, we examine the best trial's configuration for the overfitting HPO runs (compared with the grid search performance).", "If there is a pattern in a certain hyperparameter of all these configurations (e.g., warmup ratio below 0.1 for Electra), by fixing such hyperparameters to the values of grid search, we can exclude the other values which may be related to overfitting.", "We apply this analytical strategy to the initial Electra results in Table 3.", "Among the 72 runs, 9 runs overfit compared with grid search.", "For each run, we list the hyperparameter configurations of the best trial in Table 8 of Appendix A.3.", "For Electra, we have skipped showing weight decay in Table 8, because the HPO configuration is never smaller than the grid configuration, thus does not affect the result of the analysis.", "For comparative purpose, we also list the hyperparameter values of the best trial in grid search.", "To improve the readability of Table 8, we use 4 different colors (defined in Appendix A.3) to denote the comparison between values of the best trial in HPO and values of the best trial in grid search.", "From Table 8, we observe that the warmup ratio s are often significantly lower than 0.1.", "We skip the analysis on learning rate because its search space (log((2.99e-5,1.51e-4))) cannot be further reduced without losing coverage of the grid configurations or continuity; we also skip weight decay because any trial's value cannot be smaller than 0.", "Following this empirical observation, we hypothesize that fixing the warmup ratio to 0.1 can help reduce overfitting in Electra.", "We use S full to denote the original search space and S wr to denote the search space by fixing the warmup ratio to 0.1.", "If HPO overfits in both S full and S wr , the procedure will reduce the search space to the minimal continuous space S min containing the grid search space, which searches for the learning rate only.", "RoBERTa .", "We apply the same analytical strategy to the RoBERTa results in Table 3 and show the hyperparameters of the best trials in Table", "9. 
For RoBERTa, we propose to fix the values of two hyperparameters at the same time: the warmup ratio and the hidden dropout.", "We denote the search space after fixing them as S_wr_hdo.", "If HPO overfits in both S_full and S_wr_hdo, the procedure will reduce the search space to S_min, which contains the learning rate and batch size only.", "In this section, we apply the troubleshooting procedure to the initial HPO results from Table 3 and observe the execution paths.", "In Table 10 and Table 11 of Appendix A.4, we list the full execution results of the procedure for random search and random search + ASHA.", "Tables 10 and 11 include only the tasks where HPO does not succeed in the initial study.", "In Tables 10 and 11, we show the validation and test accuracy for the three repetitions of HPO runs as well as their average score.", "An Example of Executing the Procedure .", "In Figure 4, we show an example of applying the procedure to random search for Electra on RTE.", "In round 0, the validation and test accuracies of all three repetitions are lower than those of grid search.", "That implies RS needs more time budget, therefore we increase the budget (marked as res) for RS from 1GST to 4GST.", "After the increase, overfitting is detected in the 1st repetition of round 1 (validation accuracy = 84.5, test accuracy = 74.6).", "We thus reduce the search space (marked as space) from S_full to S_wr.", "In round 2, the 1st repetition still shows (weak) overfitting: RS has the same [Table 4: An example of executing the experimental procedure applied to random search for Electra on RTE; val/test per round, with res after round 0 and space after rounds 1 and 2 -- grid: 84.1/76.8; rep1: 81.9/76.1, 84.5/74.6, 84.1/76.1, 84.8/75.3; rep2: 81.6/75.1, 83.8/74.5, 83.0/74.0, 84.1/75.7; rep3: 83.0/75.7, 83.4/74.7, 82.3/73.1, 83.8/75.2; Avg: 82.2/75.6, 83.9/74.6, 83.1/74.4, 84.2/75.4]", "validation accuracy as grid search (84.1), a smaller test accuracy (76.1), and a smaller validation loss (RS's validation loss = 0.8233, grid search's validation loss = 0.9517).", "We thus continue reducing the search space to S_min, and overfitting is detected again in the 1st repetition of round 3 (validation accuracy = 84.8, test accuracy = 75.3).", "After round 3, the search space cannot be further reduced, so we classify this case as 'HPO overfits task'.", "We analyze the execution results in Tables 10 and 11 jointly as follows.", "Effects of Reducing the Search Space .", "From the two tables we can observe that reducing the search space can be effective for controlling overfitting.", "In WNLI (Electra), both algorithms outperform grid search after reducing the search space once.", "In WNLI (RoBERTa), ASHA outperforms grid search after reducing the search space twice.", "We can observe a similar trend in MRPC (Electra), SST (Electra), RTE (RoBERTa), and CoLA (RoBERTa).", "However, for these cases, overfitting still exists even after we reduce the search space twice, i.e., using the minimal search space.", "By observing cases of increased budget in Tables 10 and 11, we can see that this strategy is generally effective for improving the validation accuracy.", "After increasing the time budget, in STS-B (Electra) all HPO methods outperform grid search's validation and test accuracy; in SST (Electra-RS) and CoLA (RoBERTa) HPO outperforms grid search in only the validation accuracy.", "In RTE (Electra) and QNLI (Electra), however, this increase is not enough for bridging the gap with grid search, so HPO remains behind.", "For RTE (Electra), SST 
(Electra), QNLI (Electra), and CoLA (RoBERTa), overfitting happens after increasing the time budget from 1GST to 4GST.", "After reducing the search space, we still observe overfitting in most cases.", "Comparisons between RS and ASHA .", "By comparing the results between random search and ASHA in Tables 10 and 11, we find that before increasing the budget, RS rarely outperforms ASHA in the validation accuracy; however, after the budget of both RS and ASHA increases to 4GST, the best validation accuracy of RS consistently outperforms that of ASHA, i.e., in all of RTE (Electra), STS-B (Electra), SST (Electra), and QNLI (Electra).", "That is, the increase in the time budget has led to a more significant (validation) increase for RS than for ASHA.", "This result may be caused by two reasons.", "First, at 1GST, ASHA already samples a larger number of trials (Appendix A.2), which may be sufficient to cover its search space; on the other hand, RS cannot sample enough trials, thus increasing the time budget is more helpful.", "Second, ASHA may make mistakes by pruning a good trial that shows bad performance at the beginning.", "In Table 5, we list the final execution results for each task in Electra and RoBERTa.", "Our main findings can be summarized as follows.", "After increasing the time budget and reducing the search space, HPO outperforms grid search in the following cases: (1) in 3 cases (i.e., CoLA (Electra), STS-B (Electra) and MNLI (Electra)), HPO outperforms grid search by using the full search space, where STS-B needs more budget; (2) in 4 cases (i.e., WNLI (Electra), WNLI (RoBERTa), MRPC (RoBERTa) and STS-B (RoBERTa)), HPO succeeds after reducing the search space; (3) in the other 7 cases, HPO cannot outperform grid search even after increasing the time budget and reducing the search space.", "This result shows that when searching in a continuous space surrounding the recommended grid configurations, it can be difficult for existing automated HPO methods (e.g., Random Search, ASHA, Bayesian optimization) to outperform grid search (with manually tuned grid configurations recommended by the language model) within a short amount of time; even if we can identify a configuration with a good validation score, most likely the test score is still worse than that of grid search. [Table 5: Final results of executing the troubleshooting procedure on Electra (top) and RoBERTa (bottom) -- Electra: WNLI: all HPO succeed w/ 1GST, S_wr; RTE: RS overfits, ASHA and BO+ASHA TBD; MRPC: all HPO overfit; CoLA: all HPO succeed w/ 1GST, S_full; STS-B: all HPO succeed w/ 4GST, S_full; SST: all HPO overfit; QNLI: all HPO TBD; MNLI: all HPO succeed w/ 1GST, S_full. RoBERTa: WNLI: ASHA succeeds w/ 1GST, S_wr_hdo, RS and BO+ASHA overfit; RTE: all HPO overfit; MRPC: ASHA succeeds w/ 1GST, S_wr_hdo, RS and BO+ASHA overfit; CoLA: all HPO overfit; STS-B: RS succeeds w/ 1GST, S_wr_hdo, ASHA and BO+ASHA succeed w/ 1GST, S_min.]", "The Total Running Time for the Procedure .", "The execution of all experiments in Tables 10 and 11 took 6.8 4xV100 GPU days.", "This is in contrast to the cost if we enumerate all 5 factors in Section 3, which is 16 4xV100 GPU days.", "A Caveat on Results in Table 5 .", "For all study results in this paper (i.e., Table 3, Table 10 and Table 11), we have repeated each HPO run three times.", "Therefore, if a case succeeds in Table 5, it is because no overfitting was detected in the 3 repetitions; if we ran more repetitions, the risk of overfitting could increase.", "In addition, all results are evaluated under transformers version=3.4.0 and 
Ray version=1.2.0.", "If these versions change, results in Table 5 may change.", "Overfitting and Train/Validation/Test split .", "As overfitting indicates a negative correlation between the validation and test accuracy, one hypothesis is that overfitting is caused by the different distribution of the validation and test set.", "We thus compare HPO runs using the original GLUE spilt and a new split which uniformly partition the train/validation/test data.", "The results can be found in Appendix A.5.", "Optimization Hyperparameter optimization methods for generic machine learning models have been studied for a decade (Feurer and Hutter, 2019; Bergstra et al., 2011; Bergstra and Bengio, 2012; Swersky et al., 2013).", "Prior to that, grid search was the most common tuning strategy (Pedregosa et al., 2011).", "It discretizes the search space of the concerned hyperparameters and tries all the values in the grid.", "It can naturally take advantage of parallelism.", "However, The cost of grid search increases exponentially with hyperparameter dimensions.", "A simple yet surprisingly effective alternative is to use random combinations of hyperparameter values, especially when the objective function has a low effective dimension, as shown in (Bergstra and Ben-gio, 2012).", "Bayesian optimization (BO) (Bergstra et al., 2011; Snoek et al., 2012) fits a probabilistic model to approximate the relationship between hyperparameter settings and their measured performance, uses this probabilistic model to make decisions about where next in the space to acquire the function value, while integrating out uncertainty.", "Since the training of deep neural networks is very expensive, new HPO methods have been proposed to reduce the cost required.", "Early stopping methods (Karnin et al., 2013; Li et al., 2017, 2020) stop training with unpromising configurations at low fidelity (e.g., number of epochs) by comparing with other configurations trained at the same fidelity.", "Empirical study of these methods is mostly focused on the vision or reinforcement learning tasks, there has been few work focusing on NLP models.", "ASHA was evaluated on an LSTM model proposed in 2014 (Zaremba et al., 2014).", "In (Wang et al., 2015), the authors empirically studied the impact of a multi-stage algorithm for hyperparameter tuning.", "In (Zhang and Duh, 2020), a look-up table was created for hyperparameter optimization of neural machine translation systems.", "In BlendSearch (Wang et al., 2021), an economical blended search strategy was proposed to handle heterogeneous evaluation cost in general and demonstrates its effectiveness in fine-tuning a transformer model Turing-NLRv2.", "7 Some existing work has addressed overfitting in HPO (Levesque, 2018) or neural architecture search (Zela et al., 2020).", "For HPO, cross validation can help alleviate the overfitting when tuning SVM (Levesque, 2018), which is rarely applied in deep learning due to high computational cost.", "For neural architecture search (Zela et al., 2020), the solution also cannot be applied to our case due to the difference between the two problems.", "Models As fine-tuning pre-trained language models has become a common practice, existing works have studied how to improve the performance of the fine-tuning stage.", "Among them, many has focused on improving the robustness of fine-tuning.", "For example, ULMFit (Howard and Ruder, 2018) shows that an effective strategy for reducing the catastrophic forgetting in fine-tuning is to use the slanted triangular 
learning rate scheduler (i.e., using a nonzero number of warmup steps).", "Other strategies for controlling overfitting in fine-tuning include freezing a part of the layers to reduce the number of parameters and gradually unfreezing the layers (Peters et al., 2019), adding a regularization term to the objective function of fine-tuning (Jiang et al., 2020), and multi-task learning (Phang et al., 2018).", "Applying these techniques may reduce overfitting in our experiments; however, our goal is to compare grid search and HPO, and if these techniques are helpful, they are helpful to both.", "To simplify the comparison, we thus focus on fine-tuning the original model.", "Meanwhile, the performance of fine-tuning can be significantly different with different choices of the random seed (Dodge et al., 2020).", "To remove the variance from the random seed, we have fixed all the random seeds to 42, although HPO can be used to search for a better random seed.", "Zhang et al. (2021) identify the instability of fine-tuning the BERT model in few-sample cases of GLUE (i.e., RTE, MRPC, STS-B, and CoLA).", "Similar to our work, they also found that overfitting increases when searching for more trials.", "However, they have not compared grid search with HPO.", "There are also many discussions on how to control overfitting by tuning hyperparameters (in manual tuning), e.g., the learning rate (Smith and Le, 2017), batch size (Smith et al., 2017), and dropout rates (Srivastava et al., 2014; Loshchilov and Hutter, 2019, 2018), which may help with designing a search space for HPO that overfits less.", "Our study suggests that for the problem of fine-tuning pre-trained language models, it is difficult for automated HPO methods to outperform manually tuned grid configurations with a limited time budget.", "However, it is possible to design a systematic procedure to troubleshoot the performance of HPO and improve it.", "We find that setting the search space appropriately per model and per task is crucial.", "Automating that setting for different models and tasks would be beneficial for achieving the goal of automated HPO for fine-tuning.", "For example, one may consider automatically mining the patterns from Tables 8 and 9 to identify the hyperparameters that likely cause overfitting.", "Further, for the tasks that remain unsuitable for HPO, other means of reducing overfitting are required.", "One possibility is to optimize a different metric during HPO as a less overfitting-prone proxy of the target metric on test data.", "Previous work has shown that the random seed is crucial to the performance of fine-tuning (Dodge et al., 2020).", "Fine-tuning also benefits from ensembling or selecting a few of the best-performing seeds (Liu et al., 2019).", "For future work, it would be interesting to study HPO's performance when adding the random seed to the search space.", "In our study, the simple random search method stands strong against more advanced BO and early stopping methods.", "This suggests room for researching new HPO methods specialized for fine-tuning.", "A method that can robustly outperform random search with a small resource budget will be useful.", "It is worth mentioning that although we find HPO sometimes underperforms grid search, the grid search configurations we study are the default ones recommended by the pre-trained language models for fine-tuning, and therefore they may already be extensively tuned.", "We cannot conclude that HPO is not helpful when manual tuning has not been done.", "How to leverage HPO 
methods in that scenario is an open question." ]
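For concreteness, here is a minimal, hedged sketch of the HPO setup described in the entry above: Ray Tune (the library and version named there) searching a continuous relaxation of the recommended grid under a fixed time budget. The weight decay range (0, 0.3), the warmup range (0, 0.2), and the learning-rate range log((2.99e-5, 1.51e-4)) are taken from the entry; the dropout and batch-size bounds, the stub fine-tuning function, and the availability of the time_budget_s argument in that Ray version are assumptions.

```python
# Sketch only: the fine-tuning body and some bounds are placeholders, not the
# authors' code. Only the Ray Tune calls (tune.run, ASHAScheduler, and the
# sampling helpers) are real library API.
from ray import tune
from ray.tune.schedulers import ASHAScheduler

def finetune(config):
    # Hypothetical stand-in for fine-tuning Electra/RoBERTa-base with the
    # HuggingFace transformers library under `config`.
    for epoch in range(config["num_epochs"]):
        val_acc = train_one_epoch_and_eval(config)   # hypothetical helper
        tune.report(val_acc=val_acc)                  # report per-epoch accuracy

search_space = {
    "learning_rate": tune.loguniform(2.99e-5, 1.51e-4),   # range from the entry above
    "warmup_ratio": tune.uniform(0.0, 0.2),               # range from the entry above
    "weight_decay": tune.uniform(0.0, 0.3),               # range from the entry above
    "attention_dropout": tune.uniform(0.0, 0.2),          # assumed bounds
    "hidden_dropout": tune.uniform(0.0, 0.2),             # assumed bounds
    "batch_size": tune.choice([16, 32]),                  # assumed grid values
    "num_epochs": 10,                                     # fixed to the grid value
}

analysis = tune.run(
    finetune,
    config=search_space,
    metric="val_acc",
    mode="max",
    num_samples=-1,                            # keep sampling until the budget is spent
    time_budget_s=4 * GRID_SEARCH_SECONDS,     # e.g., 4GST; task-specific placeholder
    scheduler=ASHAScheduler(max_t=10, grace_period=1),  # drop for pure random search
    resources_per_trial={"cpu": 1, "gpu": 1},
)
print(analysis.best_config)
```

Pure random search corresponds to removing the scheduler; the BO+ASHA variant described above would additionally pass a TPE-based search algorithm while keeping the ASHA scheduler.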
[ "abstain", "objective", "objective", "result", "objective", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain" ]
[ "Many state-of-the-art (SOTA) language models have achieved high accuracy on several multi-hop reasoning problems.", "However, these approaches tend to not be interpretable because they do not make the intermediate reasoning steps explicit.", "Moreover, models trained on simpler tasks tend to fail when directly tested on more complex problems.", "We propose the Explainable multi-hop Verbal Reasoner (EVR) to solve these limitations by", "(a) decomposing multi-hop reasoning problems into several simple ones, and", "(b) using natural language to guide the intermediate reasoning hops.", "We implement EVR by extending the classic reasoning paradigm General Problem Solver (GPS) with a SOTA generative language model to generate subgoals and perform inference in natural language at each reasoning step.", "Evaluation of EVR on Clark et al. (2020)'s synthetic question answering (QA) dataset shows that EVR achieves SOTA performance while being able to generate all reasoning steps in natural language.", "Furthermore, EVR generalizes better than other strong methods when trained on simpler tasks or less training data (up to 35.7 % and 7.7 % absolute improvement respectively).", "1 1 Introduction Large pretrained language models such as BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have been successfully used in multi-hop reasoning problems (Banerjee et al., 2020; Asai et al., 2019; Yadav et al., 2019).", "Usually, these pretrained language models solve multi-hop reasoning problems in a discriminative end-to-end manner: these models take the question and all the relevant evidence as the input, and produce the final answer to the question.", "This raises two problems.", "First, this direction lacks interpretability, i.e., it is hard to 1 The code is available at: https://github.", "( Input Facts :) Alan is blue.", "Alan is rough.", "Alan is young.", "Bob is big.", "Bob is round.", "Charlie is big.", "Charlie is blue.", "Charlie is green.", "Dave is green.", "Dave is rough.", "( Input Rules :)", "Big people are rough.", "If someone is young and round then they are kind.", "If someone is round and big then they are blue.", "All rough people are green.", "Q1 : Bob is green.", "True/false?", "[ Answer : T] Q2 : Bob is kind.", "True/false?", "[ Answer : F] Figure 1: An example taken from (Clark et al., 2020).", "know which individual reasoning steps are taken in each iteration and why.", "Second, the trained models usually suffer from the compositionality generalization problem, meaning that they tend to fail when the number of reasoning steps are much larger in the evaluation set than in the training set (Hupkes et al., 2020; Hahn, 2020; Clark et al., 2020).", "Newell (1994) categorized cognitive processes based on their time scales: unconscious activities take around 50 ms, whereas conscious actions can vary from 100 ms to hours.", "Importantly, Newell (1994) argued that conscious actions are sequences of simple conscious/unconscious actions.", "Extrapolating from cognitive science to natural language processing (NLP), in this paper we ask the question: can we design an interpretable multi-hop reasoning system that sequentially applies neural networks trained on simpler tasks?", "Further, motivated by the finding from cognitive science that people might use internal monologues to guide their reasoning, we want to explore whether it is possible to use natural language to guide this sequential process.", "In this paper, we propose a solution for these important questions.", "We provide a neural 
implementation for a classic planning/reasoning paradigm that is designed to mimic the human reasoning process: the General Problem Solver (GPS) (Newell et al., 1959).", "We augment GPS with a SOTA sequence-to-sequence (Seq2Seq) model and apply this model recursively to achieve high interpretability and better generalization of compositionality for a synthetic QA task (Clark et al., 2020).", "The contributions of our paper are the following: (1) We extend (Clark et al., 2020)'s dataset with natural language intermediate goals/statements necessary to answer each question.", "(2) We propose a neural GPS to address this QA task while generating all intermediate goals/statements in natural language.", "(3) Evaluation on the above task shows that our proposed method achieves SOTA performance.", "Importantly, our method generalizes better when trained only on simpler tasks (26.5 % to 35.7 % absolute im-provement), and less training data (7.7 % absolute improvement) compared with other strong reasoning methods.", "We build our approach from the multi-hop reasoning problem proposed in (Clark et al., 2020), which we summarize first.", "Figure 1 shows an example from this dataset.", "Each reasoning problem consists of the context C , the question Q and the answer A = { T rue, F alse } .", "C includes facts F and rules R .", "To answer a question, multiple statements in C need to be combined.", "The proofs of the questions are provided by the creators of the dataset, and each question is proved by one of three available strategies: proof, inv-proof and fail-to-prove.", "Proof directly proves a statement is true using the facts and the rules; inv-proof proves a statement is false using the facts and rules; and, lastly, fail-to-prove means the statement could not be explicitly proved to be true or false given the rules and facts.", "In the latter case, a positive statement is considered to be False , and a negative statement is considered to be True .", "For example, assuming we are given the facts and rules in Figure 1, the reasoning chain provided to prove Bob is green is using the proof strategy.", "Conversely, Alan is not green is false by inv-proof, because Alan is green can be proved by All rough people are green Alan is rough.", "Finally, Alan is nice is false and Alan is not nice is true due to fail-to-prove.", "The dataset is synthesized using hand-crafted rules and formal language, then translated to natural language.", "Some language variation is inserted (e.g., in Figure 1 the rules are expressed differ-ently).", "Depending on the number of rules and facts needed, there are 5 partitions in the dataset: DU0 , DU1 , DU2 , DU3 and DU5 , where DU stands for Depth Upto.", "DU0 means the reasoning depth of the questions is 0, i.e., the questions can be answered by just looking at the facts without applying any rules.", "DU5 means the questions may require applying the rules for upto 5 times (but DU5 also has questions that require applying the rules for 0 to 4 times).", "Additionally, a birds-electricity dataset is also provided to test the model's generalization ability.", "The F , R , Q are generated by similar templates of DU0 to DU5, but with different enti-ties/predicates/attributes that do not appear in the DU0 to DU5 partitions.", "Summing up all partitions, the dataset has approximately 500K questions, and the train/dev/test ratio is 70/10/20.", "More details can be found in (Clark et al., 2020).", "We will compare our approach against two strong baselines.", "The first baseline is a 
RoBERTa classifier from the original paper of Clark et al. (2020).", "In this approach, the questions are solved in a text classification manner.", "That is, for each question, the model takes C and Q as the input, and calculates the probability of A being true or false.", "We abbreviate this baseline as RT in our paper.", "The second baseline is PROVER (Saha et al., 2020), which handles the reasoning problem as a graph problem.", "This approach takes the input C and Q to produce both the final answer { True, False } , and a graph that indicates the reasoning path.", "We abbreviate this baseline as PR.", "Shortcoming #1: Limited interpretability RT models this reasoning problem as a text classification task over a bag of evidence.", "That is, RT takes all context and produces an answer in a single forward inference process.", "Although its predictions can often achieve high accuracy, there are several [Figure 2: (A) GPS's working cycle.]", "problems with this and similar directions, which limit their interpretability.", "First, it is unclear which part of the context was used by the answering engine.", "Second, although some methods such as PR improve over RT by producing an explanation consisting of multiple supporting facts at the end, it is still hard to explain the underlying reasoning process in human understandable terms.", "Finally, it has been shown theoretically and empirically that neural networks suffer from the compositionality generalization problem (Hupkes et al., 2020; Hahn, 2020).", "That is, neural networks have limited ability to learn recursive patterns, and they fail to generalize to recursive patterns that are much deeper than the ones seen in training.", "Shortcoming #2: Differences from the human cognitive process People do not usually solve very complex problems at once.", "Instead, people constantly generate subgoals and solve complex problems incrementally (Newell, 1966).", "Second, verbal strategies are sometimes used to guide one's reasoning (Bacon et al., 2003).", "This is different from the approaches taken by both RT and PR.", "A desired explainable multi-hop verbal reasoner: Motivated by these shortcomings, we propose several desired characteristics for an ideal problem solver.", "First, the method should be able to decompose complex problems into simple ones that are easy to answer.", "This should not only increase the interpretability of the reasoning process, but also help reduce the compositionality generalization problem, because the unseen distributions (the complex problem) can be reduced to a series of seen distributions (simple problems).", "Second, each reasoning step should be guided by natural language, so that each step is easily explainable to the human end user.", "In this section, we first review a classic planning/reasoning paradigm that is designed to mimic the human reasoning process, the General Problem Solver (GPS) (Newell et al., 1959), and then propose our neural implementation of it.", "GPS works in cycles (Figure 2 (A)): in each cycle, the operator proposer P reads the current goal G to propose an operator O = P(G).", "Then the proposed operator is used by the executor E to update the goal: G = E(G, O).", "The cycle stops when the goal is satisfied, or no new operators are proposed.", "Figure 2 (B) shows a toy example from the block world, where the agent starts from the goal state and searches for a sequence of actions to reach the initial state.", "Although GPS has been widely used to mimic the human reasoning process, 
it has shortcomings.", "First, the representations of the goals are usually in a formal language, which has limited expressiveness and readability compared to natural language.", "Second, the proposer uses human-crafted rules to match and propose operators, which may not be flexible enough to handle situations that diverge from the training examples.", "Due to these drawbacks, we propose to add neural components to GPS (a working cycle is shown in Figure 3).", "More specifically, the neural GPS has the following characteristics: Goal (Extended to Working Memory): First, the goal is represented in natural language instead of formal language to enable better readability and expressiveness.", "Second, the goal is extended to a working memory buffer, which contains not only the goal but also other information that might be useful to the reasoning process.", "proposer is no longer using explicit rules.", "Instead, we use a Seq2Seq neural network to directly map Figure 3: (A) A working cycle for the proposed neural GPS.", "the text in the working memory to a sequence of operators.", "The operators are later used by the executor, which also has neural components, to update the working memory buffer.", "In this section we explain in detail the working flow of the neural GPS (referred as Explainable Verbal Reasoner or EVR later) and the design of each of its components for multi-hop verbal reasoning.", "A Walk-Through Example: Figure 4 shows an example of how our method solves the problem in Figure 1. Every two consecutive blocks form a working cycle (e.g., patterns 1 & 2 or 3&4 ).", "In each cycle, the odd pattern is the operator proposing stage, and the even pattern is the executing stage.", "buffers available for the operator proposer, with the first buffer storing some general knowledge about the problem, and the second describing the goal.", "The operator proposer, a Seq2Seq neural model, first concatenates the two episodic buffers, then proposes GENERATE_SUBGOALS as the operator.", "At the executing stage, the executor (another Seq2Seq neural model), takes the two episodic buffers in the working memory and the GENERATE_SUBGOALS operator to produce the subgoals: judge whether the facts can prove the goal or the rules can prove the goal.", "Finally, the newly generated subgoals replace the old goal in the episodic buffer (i.e., the goals in pattern 3 and 7 are different from pattern 1, because the goal in pattern 1 is replaced) and one working cycle is finished.", "At pattern 10, the EVR discovers that the new goal is to prove bob is rough, so another recursive search process starts (largely repeating the process of pattern 1 to 10).", "Working Memory: Working memory is a global memory space with several storage fields, where each field is indexed by a textual key (Figure 3 (B)).", "In this verbal reasoning problem, three types of information can be stored in the working memory: episodic buffer (indexed by the key EPISODIC_BUFFER), fact buffer (indexed by FACT_BUFFER_[ i ], since there are probably more than one fact buffers), and rule buffer (indexed by RULE_BUFFER_[ i ]).", "The episodic buffer stores three types of information:", "(a) general statements about the reasoning task that are useful throughout the reasoning process (e.g., row 1 in Figure 3 (B));", "(b) goal (row 2, Figure 3 (B)).", "Similar to GPS, a description of goal should be included in the working memory.", "We use natural language to describe the goals, and goals are updated periodically;", "(c) inferred knowledge during 
the reasoning process (row 3, Figure 3 (B)).", "Note that the working memory is not equivalent to the input text in the patterns in Figure 4.", "The input text in Figure 4 is obtained from the working memory, but some information in the working memory might not be needed by many patterns.", "For example, fact buffers and rule buffers are only the input for pattern 6 and 10, whereas other patterns do not use them.", "Ideally, at each cycle, both the proposer and the executor need to determine what information to use from the working memory, and the executor also needs to determine what information to modify in the working memory.", "This is a hard problem: assume there are n pieces of information in the working memory, there will be 2 n ways to read from/modify the memory.", "Therefore we make the following simplifications for this verbal reasoning problem.", "First, the size of episodic buffer is fixed to 2 , and the first episodic buffer slot (i.e., there are X fact buffers and Y rule buffers) can not be modified; the second episodic buffer slot is constantly modified as the new subgoals are generated.", "Second, the fact buffers and the rule buffers could not be modified.", "Third, the proposer and the executor only use the two episodic buffers as the input by default.", "The fact buffer and rule buffer can be used as input, but only when explicit commands are generated (e.g., pattern 5 and 6 in Figure 4).", "Operator Proposer: We use a SOTA Seq2Seq language model, Google T5 (Raffel et al., 2020), as the operator proposer.", "The proposer concatenates the two episodic buffer slots as a single piece of text, and uses this text as the input to produce a sequence of operators.", "Executor: The executor has three functions: (1) parse the operators, (2) call the correct neural module given the operator, to get the answer/get the subgoal, and (3) update the working memory.", "Since the operators are fairly simple, we just use several if-else conditions to determine what actions need to be taken.", "Some examples are given in Table 1. As discussed above, the changing of working memory is restricted to changing the second episodic buffer slot.", "The major part of the executor is the neural module, which is responsible for generating subgoals (pattern 2 ,4 ,8 in Figure 4), answering questions (pattern 6 in Figure 4) and deriving statements (pattern 10 in Figure 4).", "Again we use Google T5 as this neural module to read from the working memory and produce textual output.", "More details can be found in Appendix A.3.", "Finally, to train the working flow we proposed in Section 3.3, 12 patterns of training data need to be generated (only 10 are shown in Figure 4).", "The generation strategies for some critical patterns are shown in Table 2. 
In summary, we write rules to generate the input text and the output text for each pattern.", "Around 1M training samples are generated for the 12 patterns in total.", "2 We implemented three variants of EVR to learn these 12 tasks: EVR1: This is the EVR baseline.", "For this baseline we use three distinct T5 models: one to learn pattern 6 data, one to learn pattern 10 data, and one to learn the rest of the patterns.", "The fact buffer size is set to 5 and the rule buffer size is set to 3 (5 facts per fact buffer and 3 rules per rule buffer).", "EVR2: The fact buffer size and rule buffer size are the same as the EVR1.", "However, we use a single T5 model to learn all patterns of data.", "This is to test whether multi-task learning helps or harms the performance of EVR.", "EVR3: We use three T5 models like EVR1, but the fact buffer size is set to 20 and the rule buffer 2 The data generation code can be found at: https://github.com/clulab/releases/tree/master/naacl2021-evr Operator Example Description AND/OR Pattern 2, 4, 8 in Figure 4 Conjunction/disjunction operator to connect two branches.", "size to 10.", "Therefore for all problems there will be only one fact buffer and one rule buffer.", "Since there are more facts and rules in the buffer, the input text to the T5 will be longer.", "And since there are more rules in the rule buffer, the number of matched rules could be more, so the target text could be potentially longer.", "We conjecture the longer input and output will make the model harder to train.", "We use T5 small for all experiments, with the learning rate set to 1e-4.", "In each epoch, the models are trained on 24,000 training examples, and evaluated on the first 2,000 dev samples.", "Edit distance between the generated text and target text is used to D 0 1 2 3 4 5 all Cnt 6299 4434 2915 2396 2134 2003 20192 DU1RT 100.0 99.0 36.8 23.1 11.4 12.3 63.5 PR 73.7 EVR1 99.7 98.8 97.8 95.5 91.7 90.7 97.0 EVR2 99.5 98.3 96.8 93.1 89.3 88.1 95.9 EVR3 99.8 99.5 99.0 98.9 98.2 97.9 99.2 DU5RT 100.0 98.4 98.4 98.8 99.2 99.8 99.2 PR 100.0 99.0 98.8 99.1 98.8 99.3 99.3 EVR1 99.5 98.2 96.5 92.8 88.3 86.2 95.5 Table 3: Answer accuracy of EVR variants trained on DU1 (top) and DU5 (bottom), and evaluated on all data depths (DU5).", "evaluate the models performance on the dev set.", "The training is stopped when the edit distance on the dev set starts to increase.", "In addition to the accuracy of the model's prediction of the final answer (T / F), we also report the quality of the generated proofs.", "We extracted the critical facts/rules from EVR's reasoning process and reconstructed the reasoning chain in the same format as the provided proofs in the dataset.", "The proof is considered correct as long as the generated reasoning chain matches one of the provided reasoning chains.", "We use depth{ 0 , 1 , 2 , 3 , 4 , 5 } data (i.e., all depths) from DU5 to test all methods.", "Table 3 lists the performance of the three EVR variants for QA.", "We compare EVR against RT and PR trained on the same DU1 data (top part of the ta-ble), and all the data (DU5) (bottom part).", "EVR outperforms the other two baseline methods trained on DU1 on nearly all splits.", "The best performing EVR is EVR3, which successfully maintains a 97.9 accuracy on depth-5 testing data, and a 99.2 accuracy on all testing splits.", "EVR3 trained on DU1 approaches the performance of the other methods trained on DU5.", "This indicates that when the training data are abundant, longer input or output does not harm the performance 
of our method.", "Table 4 shows the quality of the generated proofs.", "Only the samples that can be proved (either by proof or inv-proof) are compared with our method's proofs.", "The number of samples subject to this comparison is indicated by Cnt.", "The results in the table demonstrate that EVR obtains high-quality proofs most of the time, regardless of the proof depth.", "Table 5 shows the performance of EVR when tested under a zero-shot learning scenario on the birds-electricity dataset.", "The results show that EVR yields good generalization ability and outperforms the other two baseline methods in general.", "Notably, EVR1 trained on DU1 considerably outperforms the baseline methods trained on DU5.", "A surprising result is that RT trained on DU1 yields better results than RT trained on DU5.", "The RT creators explained that this is because some extremely rare cases in the training data are not well learned by their DU5 model.", "Surprisingly, EVR1 trained on DU5 yields a low performance in this evaluation (e.g., 61.6 on all datasets).", "We inspected several outputs of the reasoning steps generated by EVR1, and observed [Table 7: Evaluation of the predicted answers of EVR1 when trained on less data (10k, 30k, 70k examples), compared to PR.]", "that the major reason for this failure is that pattern 4 is not successfully learned due to a bias in the DU5 data.", "In the DU5 training data, there are at least 7 facts for each question (i.e., at least 2 fact buffers when the fact buffer size is set to 5).", "In this case, the target output for pattern 4 would be 'I want to judge whether fact buffer 1 ...' OR 'I want to judge whether fact buffer 2 ...'.", "In contrast, in the birds-electricity dataset, some questions have only 3 supporting facts, so there is only 1 fact buffer.", "In this case, T5 should generate 'I want to judge whether fact buffer 1 can prove [query].' 
without further disjunctions.", "However, since such examples never appear in the DU5 training data, the T5 still generates instructions to loop over multiple fact buffers, which causes the reasoning program to fail (due to the attempt to access non-existent fact buffers).", "To verify the above observation, we trained patterns 6 and 10 using DU5, and all other patterns on DU1.", "We report this model's performance (indicated by EVR_1^c) in Tables 5 and 6.", "These results indicate that our intuition was correct, as EVR_1^c yields good generalization performance on the birds-electricity dataset.", "Table 6 shows EVR generates high-quality proofs overall, but the proofs on B1 and B2 are poor.", "The reason is that the B1 and B2 datasets contain some examples that have proofs unsupported by our model (e.g., directly proving that a positive statement is False by contradiction).", "Tables 7 and 8 show that EVR1 yields stable performance and proofs when trained with considerably less data (10k and 30k examples).", "In the lowest data configuration (10k), EVR outperforms PR considerably, even when PR is trained on DU5.", "This [Table 8: Evaluation of the generated proofs of EVR1 when trained on less data (10k, 30k, 70k examples).]", "experiment supports our claim that EVR suffers less from the compositionality generalization problem by recursively decomposing complex problems and reasoning over simple ones.", "Neural Symbolic Methods: One branch of neural-symbolic reasoning methods is to design different components in the network while keeping the whole network differentiable.", "Typical works include the Differentiable Neural Computer (DNC) (Graves et al., 2016), End-to-end Memory Networks (Sukhbaatar et al., 2015), Dynamic Memory Networks (DMN) (Kumar et al., 2016) and Compositional Attention Networks (MAC) (Hudson and Manning, 2018).", "Another direction is neural modular networks, where the components to use are determined dynamically for each question (Gupta et al., 2019; Jiang and Bansal, 2019).", "However, it is hard to prove that the components are actually fulfilling the designed functionality after training, due to the distributed nature of the intermediate representations.", "In contrast, we explicitly evaluate the performance of each component of EVR after training, achieving better faithfulness (Subramanian et al., 2020).", "Formal Theorem Prover: Neural components have been used to augment formal theorem proving in several ways.", "Polu and Sutskever (2020) apply a Seq2Seq neural network to mathematical theorem proving by training the neural network to generate the proof at each step.", "Some works seek to use distributed representations to augment rule-based backward chaining (Weber et al., 2019; Dong et al., 2018).", "However, these works still rely heavily on formal representations and do not generate natural language subgoals at each step.", "Problem Solver and Cognitive Architectures: Our work is also largely inspired by cognitive architectures such as ACT-R (Anderson et al., 1997) and SOAR (Laird, 2012), which originate from Newell's GPS.", "These cognitive architectures employ symbolic systems to simulate general human cognitive processes, but have not been used on complex reasoning problems in NLP.", "Internal Monologue: Internal monologue is the subjective experience of language without overt articulation, and it 
plays important roles in cognition (Alderson-Day and Fernyhough, 2015).", "The evidence on the role of internal monologue in problem solving/reasoning is mixed.", "Studies have shown that internal monologue might not be crucial to visual reasoning (Phillips, 1999), whereas in verbal reasoning tasks, some subjects indeed rely more on internal monologue than on visual imagery (Bacon et al., 2003).", "However, there is still no wide consensus on the form/grammar of the internal monologue.", "Question Decomposition: Our work is also different from several existing works on question decomposition, where the strategy of decomposition is largely reflected by the question itself (Min et al., 2019; Wolfson et al., 2020).", "In contrast, the expressions of our questions are already simple, and do not reflect the decomposition strategies.", "Does EVR solve the problems raised in Section 2.4?", "We believe our neural GPS at least partially solves the issues mentioned in Section 2.4.", "First, EVR decomposes a hard problem into several simple ones, thus resembling the human thinking process more closely.", "In addition, this modular strategy also enables EVR to suffer less from the compositionality generalization problem (as shown in Section 4).", "Second, during each step of reasoning, all the subgoals and the derived statements are expressed in natural language, thus making the reasoning proofs interpretable.", "Could this task be addressed with a pure rule-based approach?", "Due to the language variations introduced (by the authors) to this dataset, a pure rule-based method is probably not easily applicable.", "For example, during the generation of pattern 10 training data, we use the provided metadata (formal language, no variation, not available at test time) to help us produce the necessary supervision.", "We found it otherwise hard to compose the rules directly from the natural language representation.", "Can the operator proposer be replaced by a rule-based one?", "Due to the synthetic nature of the dataset, it is possible to replace the operator proposer in Section 3.3 with a rule-based algorithm (e.g., patterns 1, 3, 5, 7, 9, 11 can be processed with pure rules).", "However, this is not the goal of our work.", "In this paper, we study how well a Seq2Seq model can learn under more realistic conditions, i.e., in real-world scenarios, the agent might only have access to input-output training pairs (rather than rules).", "Limited language variation: Although the data used here contains some variation in language, it is still considerably simpler than real-world natural language.", "Thus, it remains unanswered whether the Seq2Seq components can achieve robust performance on actual natural language.", "Can other complex problems be reduced to simple ones?", "It is unclear whether most real-world multi-hop reasoning problems can be reduced to a (not too large) set of basic cognitive processes that can be learned.", "Working memory management: Due to the recursive nature of the problem we solve in this paper, the memory access/modification can be simplified.", "However, it is unknown whether recursive and context-free patterns are the only way in which humans think.", "If not, the access and modification of working memory will become a challenging problem.", "Acquisition of training data: Finally, the acquisition of high-quality training data is not always easy in the real world.", "It is possible that low-quality training data introduce dangerous cascading errors.", "In this paper we propose the Explainable 
multihop Verbal Reasoner (EVR) to solve a synthetic question answering problem that requires multihop reasoning (Clark et al., 2020).", "EVR answers a complex question by reducing it to several simple ones, and guides all reasoning steps with natural language for better interpretability.", "Evaluation of EVR shows that it achieves high accuracy, suffers less from the compositionality generalization problem, and generalizes well when training data are not abundant." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "method", "abstain", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
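The EVR executor described in the record above is only specified at a high level: parse simple operators with if-else conditions, call a T5-based neural module to generate subgoals, answers, or derived statements, and update only the second episodic buffer slot. The following Python sketch illustrates that control flow under stated assumptions; the operator strings, buffer layout, and prompt format are hypothetical (the record does not give them in this form), and the neural module is stubbed with a plain callable rather than a fine-tuned T5 model.

```python
# Minimal sketch of an EVR-style executor step (hypothetical names; not the authors' code).
from typing import Callable, List


class WorkingMemory:
    """Two episodic buffer slots plus read-only fact/rule buffers."""

    def __init__(self, fact_buffers: List[str], rule_buffers: List[str], query: str):
        # Slot 1 describes the buffer layout and is never modified.
        self.slot1 = f"There are {len(fact_buffers)} fact buffers and {len(rule_buffers)} rule buffers."
        # Slot 2 holds the current subgoal and is rewritten every cycle.
        self.slot2 = query
        self.fact_buffers = fact_buffers   # read-only
        self.rule_buffers = rule_buffers   # read-only


def executor_step(memory: WorkingMemory, operator: str,
                  neural_module: Callable[[str], str]) -> str:
    """Parse one operator with simple if/else rules, call the neural module,
    and update only the second episodic buffer slot."""
    if operator.startswith("AND") or operator.startswith("OR"):
        # Conjunction/disjunction operator: record the branch subgoals
        # (the recursive solving of each branch is omitted in this sketch).
        branches = operator.split(";")[1:]
        memory.slot2 = " / ".join(b.strip() for b in branches)
        return memory.slot2
    elif operator.startswith("JUDGE fact buffer"):
        # Answer a question against one fact buffer (cf. pattern 6 in Figure 4).
        idx = int(operator.split()[3]) - 1
        return neural_module(f"{memory.slot2} </s> {memory.fact_buffers[idx]}")
    elif operator.startswith("DERIVE rule buffer"):
        # Derive a new statement from one rule buffer (cf. pattern 10 in Figure 4).
        idx = int(operator.split()[3]) - 1
        return neural_module(f"{memory.slot2} </s> {memory.rule_buffers[idx]}")
    else:
        # Default: ask the neural module for the next subgoal, which overwrites slot 2.
        memory.slot2 = neural_module(f"{memory.slot1} </s> {memory.slot2}")
        return memory.slot2


if __name__ == "__main__":
    # Dummy callable standing in for a fine-tuned T5 model.
    dummy = lambda text: f"[T5 output for: {text[:40]}...]"
    mem = WorkingMemory(["fact buffer 1 text"], ["rule buffer 1 text"],
                        "I want to judge whether the bear is green.")
    print(executor_step(mem, "JUDGE fact buffer 1", dummy))
```

In the full system this step would run inside a recursive loop driven by the operator proposer, with newly generated subgoals written back into the second episodic buffer slot at each cycle.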
[ "The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to", "(i) identify relevant knowledge from large KGs, and", "(ii) perform joint reasoning over the QA context and KG.", "Here we propose a new model, QA-GNN, which addresses the above challenges through two key innovations:", "(i) relevance scoring, where we use LMs to estimate the importance of KG nodes relative to the given QA context, and", "(ii) joint reasoning, where we connect the QA context and KG to form a joint graph, and mutually update their representations through graph-based message passing.", "We evaluate QA-GNN on the CommonsenseQA and OpenBookQA datasets, and show its improvement over existing LM and LM+KG models, as well as its capability to perform interpretable and structured reasoning, e.g., correctly handling negation in questions.", "[Figure 1 residue: example question with answer choices A. hair brush, B. bathroom, C. art supplies*, D. shower, E. hair salon; a QA context node connected to a knowledge graph with entities such as round brush, hair brush, painting, art supply, and hair, and relations such as AtLocation, UsedFor, and RelatedTo.]", "[Ablation table residue, GNN Layers (3.3) vs. Dev Acc.: L=3 75.53, L=4 76.34, L=5 (final) 76.54, L=6 76.21, L=7 75.96.]", "[Ablation table residue, Graph Connection (3.1) vs. Dev Acc.: no edge between Z and KG nodes 74.81, connect Z to all KG nodes 76.38, connect Z to QA entity nodes (final system) 76.54.]", "Question answering systems must be able to access relevant knowledge and reason over it.", "Typically, knowledge can be implicitly encoded in large language models (LMs) pre-trained on unstructured text (Petroni et al., 2019; Bosselut et al., 2019), or explicitly represented in structured knowledge graphs (KGs), such as Freebase (Bollacker et al., 2008) and ConceptNet (Speer et al., 2017), where entities are represented as nodes and relations between them as edges.", "Recently, pre-trained LMs have demonstrated remarkable success in many question answering tasks (Liu et al., 2019; Raffel et al., 2020).", "However, while LMs have a broad coverage of knowledge, they do not empirically perform well on structured reasoning ( e.g.
, handling negation) (Kassner and Schütze, 2020).", "[Ablation table residue, Graph Connection (3.1) vs. Dev Acc.: no edge between Z and KG nodes 74.11, connect Z to all KG nodes 76.38, connect Z to QA entity nodes (final) 76.54.]", "[Ablation table residue, Relevance scoring (3.2) vs. Dev Acc.: nothing 75.15, w/ contextual embedding 76.31, w/ relevance score (final) 76.54, w/ both 76.52.]", "[Ablation table residue, GNN Attention & Message (3.3) vs. Dev Acc.: node type, relation, score-aware (final) 76.54; type-aware 75.11; relation-aware 75.23; score-aware 75.15.]", "[Ablation table residue, GNN Attention & Message (3.3) vs. Dev Acc.: node type, relation, score-aware (final system) 76.54; type-aware 75.41; relation-aware 75.61; score-aware 75.56.]", "[Ablation table residue, GNN Layers (3.3) vs. Dev Acc.: L=3 75.53, L=4 76.34, L=5 (final system) 76.54, L=6 76.21, L=7 75.96.]", "[Ablation table residue, Graph Connection (3.1) vs. Dev Acc.: no edge between Z and KG nodes 74.81, connect Z to all KG nodes 76.38, connect Z to QA entity nodes (final) 76.54.]", "[Ablation table residue, Relevance scoring (3.2) vs. Dev Acc.: nothing 75.56, w/ contextual embedding 76.31, w/ relevance score (final) 76.54, w/ both 76.52.]", "[Ablation table residue, GNN Attention & Message (3.3) vs. Dev Acc.: node type, relation, score-aware (final) 76.54; type-aware 75.41; relation-aware 75.61; score-aware 75.56.]", "[Ablation table and Figure 5 residue, GNN Layers (3.3) vs. Dev Acc.: L=3 75.53, L=4 76.34, L=5 (final) 76.54, L=6 76.21, L=7 75.96; example question: If it is not used for hair, a round brush is an example of what?]", "How to reason effectively with both sources of knowledge remains an important open problem.", "Combining LMs and KGs for reasoning (henceforth, LM+KG) presents two challenges: given a QA context (e.g., question and answer choices; Figure 1 purple box), methods need to", "(i) identify informative knowledge from a large KG (green box); and", "(ii) capture the nuance of the QA context and the structure of the KGs to perform joint reasoning over these two sources of information.", "Previous works (Bao et al., 2016; Sun et al., 2018; Lin et al., 2019) retrieve a subgraph from the KG by taking topic entities (KG entities mentioned in the given QA context) and their few-hop neighbors.", "However, this introduces many entity nodes that are semantically irrelevant to the QA context, especially when the number of topic entities or hops increases.", "Additionally, existing LM+KG methods for reasoning (Lin et al., 2019; Wang et al., 2019a; Feng et al., 2020; Lv et al., 2020) treat the QA context and KG as two separate modalities.", "They individually apply LMs to the QA context and graph", "[Figure 1: Given the QA context (question and answer choice; purple box), we aim to derive the answer by performing joint reasoning over the language and the knowledge graph (green box). Example question: If it is not used for hair, a round brush is an example of what? Choices: A. hair brush, B. bathroom, C. art supplies*, D. shower, E. hair salon.]", "[Ablation table residue, Contextualization (3.2) vs. Dev Acc.: no contextualization 75.56, w/ contextual embedding 76.31, w/ relevance score (final system) 76.54, w/ both 76.52.]", "be noisy (Bordes et al., 2013; Guu et al., 2015).", "neural networks (GNNs) to the KG, and do not mutually update or unify their representations.", "This separation might limit their capability to perform structured reasoning, e.g.
, handling negation.", "Here we propose QA-GNN, an end-to-end LM+KG model for question answering that addresses the above two challenges.", "We first encode the QA context using an LM, and retrieve a KG subgraph following prior work (Feng et al., 2020).", "Our QA-GNN has two key insights:", "(i) Relevance scoring: Since the KG subgraph consists of all few-hop neighbors of the topic entities, some entity nodes are more relevant than others with respect to the given QA context.", "We hence propose KG node relevance scoring: we score each entity on the KG subgraph by concatenating the entity with the QA context and calculating the likelihood using a pretrained LM.", "This presents a general framework to weight information on the KG;", "(ii) Joint reasoning: We design a joint graph representation of the QA context and KG, where we explicitly view the QA context as an additional node (QA context node) and connect it to the topic entities in the KG subgraph as shown in Figure 1.", "This joint graph, which we term the working graph, unifies the two modalities into one graph.", "We then augment the feature of each node with the relevance score, and design a new attention-based GNN module for reasoning.", "Our joint reasoning algorithm on the working graph simultaneously updates the representations of both the KG entities and the QA context node, bridging the gap between the two sources of information.", "We evaluate QA-GNN on two question answering datasets that require reasoning with knowledge: CommonsenseQA (Talmor et al., 2019) and OpenBookQA (Mihaylov et al., 2018), using the ConceptNet KG (Speer et al., 2017).", "QA-GNN outperforms strong fine-tuned LM baselines as well as the existing best LM+KG model (with the same LM) by up to 5.7% and 3.7% respectively.", "In particular, QA-GNN exhibits improved performance on some forms of structured reasoning (e.g., correctly handling negation and entity substitution in questions): it achieves a 4.6% improvement over fine-tuned LMs on questions with negation, while existing LM+KG models are +0.6% over fine-tuned LMs.", "We also show that one can extract reasoning processes from QA-GNN in the form of general KG subgraphs, not just paths (Lin et al., 2019), suggesting a general method for explaining model predictions.", "We aim to answer natural language questions using knowledge from a pre-trained LM and a structured KG.", "We use the term language model broadly to mean any composition of two functions, f_head(f_enc(x)), where f_enc, the encoder, maps a textual input x to a contextualized vector representation h^LM, and f_head uses this representation to perform a desired task (which we discuss in Section 3.2).", "In this work, we specifically use masked language models ( e.g.
, RoBERTa) as f_enc, and let h^LM denote the output representation of a [CLS] token that is prepended to the input sequence x, unless otherwise noted.", "We define the knowledge graph as a multi-relational graph G = (V, E).", "Here V is the set of entity nodes in the KG; E ⊆ V × R × V is the set of edges that connect nodes in V, where R represents a set of relation types.", "Given a question q and an answer choice a ∈ C, we follow prior work (Lin et al., 2019) to link the entities mentioned in the question and answer choice to the given KG G.", "We denote V_q ⊆ V and V_a ⊆ V as the sets of KG entities mentioned in the question (question entities; blue entities in Figure 1) and answer choice (answer choice entities; red entities in Figure 1), respectively, and use V_{q,a} := V_q ∪ V_a to denote all the entities that appear in either the question or answer choice, which we call topic entities.", "We then extract a subgraph from G for a question-choice pair, G^{q,a}_sub = (V^{q,a}_sub, E^{q,a}_sub), which comprises all nodes on the k-hop paths between nodes in V_{q,a}.", "Some entities are more relevant than others given the context.", "[Figure 3 annotation residue: Entity relevance estimated.]", "[Ablation table header residue: Graph Connection (3.1), Dev Acc.]", "[Ablation table residue, Graph Connection (3.1) vs. Dev Acc.: no edge between Z and KG nodes 74.11, connect Z to all KG nodes 76.38, connect Z to QA entity nodes (final system) 76.54.]", "[Ablation table residue, Contextualization (3.2) vs. Dev Acc.: no contextualization 75.15, w/ contextual embedding 76.31, w/ relevance score (final system) 76.54, w/ both 76.52.]", "[Ablation table residue, GNN Attention & Message (3.3) vs. Dev Acc.: node type, relation, score-aware (final system) 76.54; type-aware 75.11; relation-aware 75.23; score-aware 75.15.]", "[Ablation table residue, GNN Layers (3.3) vs. Dev Acc.: L=3 75.53, L=4 76.34, L=5 (final system) 76.54, L=6 76.21, L=7 75.96.]", "[Ablation table residue, Inference (3.4) vs. Dev Acc.: final states of Z and KG (final system) 76.54, Z 74.91, KG 75.15.] [Figure 3: Relevance scoring of the retrieved KG: we use a pre-trained LM to calculate the relevance of each KG entity node conditioned on the QA context (Section 3.2).]", "3 Approach: QA-GNN. As shown in Figure 2, given a question and an answer choice a, we concatenate them to get the QA context [q; a].", "To reason over a given QA context using knowledge from both the LM and the KG, QA-GNN works as follows.", "First, we use the LM to obtain a representation for the QA context, and retrieve the subgraph G_sub from the KG.", "Then we introduce a QA context node z that represents the QA context, and connect z to the topic entities V_{q,a} so that we have a joint graph over the two sources of knowledge, which we term the working graph, G_W (Section 3.1).", "To adaptively capture the relationship between the QA context node and each of the other nodes in G_W, we calculate a relevance score for each pair using the LM, and use this score as an additional feature for each node (Section 3.2).", "We then propose an attention-based GNN module that does message passing on G_W for multiple rounds (Section 3.3).", "Finally, we make the final prediction using the LM representation, the QA context node representation and a pooled working graph representation (Section 3.4).", "Since this joint graph intuitively provides a reasoning space (working memory) over the QA context and KG, we term it the working graph G_W = (V_W, E_W), where V_W = V_sub ∪ {z} and E_W = E_sub ∪ {(z, r_{z,q}, v) | v ∈ V_q} ∪ {(z, r_{z,a}, v) | v ∈ V_a}.", "Each node in G_W is associated with one of four types: T = {Z, Q, A, O}, indicating the context node z, nodes in V_q, nodes in V_a, and other nodes, respectively (corresponding to the node colors purple, blue
, red, and gray in Figures 1 and 2).", "We denote the text of the context node z (the QA context) and of a KG node v ∈ V_sub (the entity name) as text(z) and text(v).", "We initialize the node embedding for z using the LM representation of the QA context (z^LM = f_enc(text(z))), and each node on G_sub using the entity embeddings from Feng et al. (2020).", "In the subsequent sections, we will reason over the working graph in order to score a given (question, answer choice) pair.", "As an example shown in Figure 3, the retrieved KG subgraph G_sub with few-hop neighbors of V_{q,a} may include nodes that are uninformative for the reasoning process, e.g., nodes holiday and river bank are off-topic; human and place are generic.", "These irrelevant nodes may result in overfitting or introduce unnecessary difficulty in reasoning, an issue especially when V_{q,a} is large.", "For instance, we empirically find that using the ConceptNet KG (Speer et al., 2017), we will retrieve a KG with |V_sub| > 400 nodes on average if we consider 3-hop neighbors.", "To design a joint reasoning space for the two sources of knowledge, we explicitly connect them in a common graph structure.", "We introduce a new QA context node z which represents the QA context, and connect z to each topic entity in V_{q,a} on the KG subgraph G_sub using two new relation types r_{z,q} and r_{z,a}.", "These relation types capture the relationship between the QA context and the relevant entities in the KG, depending on whether the entity is found in the question portion or the answer portion of the QA context.", "[Figure 3 caption fragment: ... conditioned on the QA context.]", "For each node v, we concatenate the entity text(v) with the QA context text(z) and compute the relevance score ρ_v = f_head(f_enc([text(z); text(v)])) (Eq. 1), where f_head ∘ f_enc denotes the probability of text(v) computed by the LM.", "This relevance score ρ_v captures the importance of each KG node relative to the given QA context, and is used for reasoning on or pruning the working graph G_W.", "To perform reasoning on the working graph G_W, our GNN module builds on the graph attention framework (GAT) (Velickovic et al., 2018), which induces node representations via iterative message passing between neighbors on the graph.", "Specifically, in an L-layer QA-GNN, for each layer we update the representation h_t^(ℓ) ∈ R^D of each node t ∈ V_W by h_t^(ℓ+1) = f_n(Σ_{s ∈ N_t ∪ {t}} α_st m_st) + h_t^(ℓ) (Eq. 2), where N_t represents the neighborhood of node t, m_st ∈ R^D denotes the message from each neighbor node s to t, and α_st is an attention weight that scales each message m_st from s to t.", "The sum of the messages is then passed through a 2-layer MLP, f_n: R^D → R^D, with batch normalization (Ioffe and Szegedy, 2015).", "For each node t ∈ V_W, we set h_t^(0) using a linear transformation f_h that maps its initial node embedding (described in Section 3.1) to R^D.", "Crucially, as our GNN message passing operates on the working graph, it will jointly leverage and update the representations of the QA context and KG.", "We further propose an expressive message (m_st) and attention (α_st) computation below.", "Node type & relation-aware message.", "As G_W is a multi-relational graph, the message passed from a source node to the target node should capture their relationship, i.e.
, the relation type of the edge and the source/target node types.", "To this end, we first obtain the type embedding u_t of each node t, as well as the relation embedding r_st from node s to node t, by u_t = f_u(u_t), r_st = f_r(e_st, u_s, u_t) (Eq. 3), where u_s, u_t ∈ {0, 1}^|T| are one-hot vectors indicating the node types of s and t, e_st ∈ {0, 1}^|R| is a one-hot vector indicating the relation type of edge (s, t), f_u: R^|T| → R^{D/2} is a linear transformation, and f_r: R^{|R|+2|T|} → R^D is a 2-layer MLP.", "We then compute the message from s to t as m_st = f_m(h_s^(ℓ), u_s, r_st) (Eq. 4), where f_m: R^{2D + D/2} → R^D is a linear transformation.", "Node type, relation, and score-aware attention.", "Attention captures the strength of association between two nodes, which is ideally informed by their node types, relations and node relevance scores.", "We first embed the relevance score of each node t by β_t = f_ρ(ρ_t) (Eq. 5), where f_ρ: R → R^{D/2} is an MLP.", "To compute the attention weight α_st from node s to node t, we obtain the query and key vectors q, k by q_s = f_q(h_s^(ℓ), u_s, β_s) (Eq. 6) and k_t = f_k(h_t^(ℓ), u_t, β_t, r_st) (Eq. 7), where f_q: R^{2D} → R^D and f_k: R^{3D} → R^D are linear transformations.", "The attention weight is then α_st = exp(γ_st) / Σ_{t' ∈ N_s ∪ {s}} exp(γ_st'), with γ_st = q_s^T k_t / √D (Eq. 8).", "3.4 Inference & Learning: Given a question q and an answer choice a, we use the information from both the QA context and the KG to calculate the probability of it being the answer, p(a|q) ∝ exp(MLP(z^LM, z^GNN, g)), where z^GNN = h_z^(L) and g denotes the pooling of {h_v^(L) | v ∈ V_sub}.", "In the training data, each question has a set of answer choices with one correct choice.", "We optimize the model (both the LM and GNN components, end-to-end) using the cross-entropy loss.", "We analyze the time and space complexity of our method and compare with prior works, KagNet (Lin et al., 2019) and MHGRN (Feng et al., 2020), in Table 1.", "As we handle edges of different relation types using different edge embeddings, instead of designing an independent graph network for each relation as in RGCN (Schlichtkrull et al., 2018) or MHGRN, the time complexity of our method is constant with respect to the number of relations and linear with respect to the number of nodes.", "We achieve the same space complexity as MHGRN (Feng et al., 2020).", "We evaluate QA-GNN on two question answering datasets: CommonsenseQA (Talmor et al., 2019) and OpenBookQA (Mihaylov et al., 2018).", "CommonsenseQA is a 5-way multiple choice QA task that requires reasoning with commonsense knowledge, containing 12,102 questions.", "The test set of CommonsenseQA is not publicly available, and model predictions can only be evaluated once every two weeks via the official leaderboard.", "Hence, [Table 1: Computation complexity of different L-hop reasoning models on a dense/sparse graph G = (V, E) with the relation set R; on a dense graph, L-hop KagNet takes O(|R|^L |V|^{L+1} L) time and space, L-hop MHGRN takes O(|R|^2 |V|^2 L) time and O(|R||V|L) space, and L-layer QA-GNN takes O(|V|^2 L) time and O(|R||V|L) space; analogous rows are given for a sparse graph with maximum node degree much smaller than |V|.]", "we perform main experiments on the in-house (IH) data split used in Lin et al. 
(2019), and also report the score of our final system on the official test set.", "OpenBookQA is a 4-way multiple choice QA task that requires reasoning with elementary science knowledge, containing 5,957 questions.", "We use the official data split.", "We use ConceptNet (Speer et al., 2017), a general-domain knowledge graph, as our structured knowledge source G for both of the above tasks.", "Given each QA context (question and answer choice), we retrieve the subgraph G_sub from G following the pre-processing step described in Feng et al. (2020), with hop size k = 2.", "Henceforth, in this section (Section 4) we use the term KG to refer to G_sub.", "We set the dimension (D = 200) and number of layers (L = 5) of our GNN module, with a dropout rate of 0.2 applied to each layer (Srivastava et al., 2014).", "The parameters of the model are optimized by RAdam (Liu et al., 2020), with batch size 128, gradient clipping 1.0 (Pascanu et al., 2013), and learning rates of 1e-5 and 1e-3 for the LM and GNN components respectively.", "Each model is trained using two GPUs (GTX Titan X), which takes 20 hours on average.", "The above hyperparameters were tuned on the development set.", "Fine-tuned LM.", "To study the role of KGs, we compare with a vanilla fine-tuned LM, which does not use the KG.", "We use RoBERTa-large (Liu et al., 2019) for CommonsenseQA, and RoBERTa-large and AristoRoBERTa (Clark et al., 2019) for OpenBookQA (which provides an extra corpus of scientific facts in textual form).", "Existing LM+KG models.", "We compare with existing LM+KG methods, which share the same high-level framework as ours but use different modules to reason on the KG in place of QA-GNN (yellow box in Figure 2): (1) Relation Network (RN) (Santoro et al., 2017), (2) RGCN (Schlichtkrull et al., 2018), (3) GconAttn (Wang et al., 2019a), (4) KagNet (Lin et al., 2019), and (5) MHGRN (Feng et al., 2020).", "(1), (2), (3) are relation-aware GNNs for KGs, and (4), (5) further model paths in KGs.", "MHGRN is the existing top-performing model under this LM+KG framework.", "For a fair comparison, we use the same LM in all the baselines and our model.", "The key differences between QA-GNN and these are that they do not perform relevance scoring or joint updates with the QA context (Section 3).", "Table 2 and Table 4 show the results on CommonsenseQA and OpenBookQA, respectively.", "On both datasets, we observe consistent improvements over fine-tuned LMs and existing LM+KG models, e.g.
, on OpenBookQA, +5.7% over RoBERTa and +3.7% over the prior best LM+KG system. [Table caption residue: additional input to the QA context.]", "[Table header residue: Acc.]", "KG node relevance scoring (top right table): We find that the relevance scoring of KG nodes (Section 3.2) provides a boost: 75.56% → 76.54%.", "As a variant of the relevance scoring in Eq.", "1, we also experimented with obtaining a contextual embedding w_v for each node v ∈ V_sub and adding it to the node features: w_v = f_enc([text(z); text(v)]).", "However, we find that it does not perform as well (76.31%), and using both the relevance score and the contextual embedding performs on par with using the score alone, suggesting that the score provides sufficient information in our tasks; hence, our final system simply uses the relevance score.", "(The prior best LM+KG system referred to above is MHGRN.)", "The boost over MHGRN suggests that QA-GNN makes better use of KGs to perform joint reasoning than existing LM+KG methods.", "We also achieve results competitive with other systems on the official leaderboards (Tables 3 and 5).", "Notably, the top two systems, T5 (Raffel et al., 2020) and UnifiedQA (Khashabi et al., 2020), are trained with more data and use 8x to 30x more parameters than our model (ours has 360M parameters).", "Excluding these and ensemble systems, our model is comparable in size and amount of data to other systems, and achieves the top performance on the two datasets.", "Graph connection (top left table): The first key component of QA-GNN is the joint graph that connects the z node (QA context) to the QA entity nodes V_{q,a} in the KG (Section 3.1).", "Without these edges, the QA context and KG cannot mutually update their representations, hurting the performance: 76.5% → 74.8%, which is close to the previous LM+KG system, MHGRN.", "If we connect z to all the nodes in the KG (not just QA entities), the performance is comparable or drops slightly (-0.16%).", "GNN architecture (bottom tables): We ablate the information of node type, relation, and relevance score from the attention and message computation in the GNN (Section 3.3).", "The results suggest that all these features improve the model performance.", "For the number of GNN layers, we find L = 5 works the best on the dev set.", "Our intuition is that 5 layers allow various message passing or reasoning patterns between the QA context (z) and KG, such as z → 3 hops on KG nodes → z.", "We aim to interpret QA-GNN's reasoning process by analyzing the node-to-node attention weights induced by the GNN.", "Figure 4 shows two examples.", "In", "(a), we perform Best First Search (BFS) on the working graph to trace high attention weights from the QA context node (Z; purple) to Question entity nodes (blue) to Other (gray) or Answer choice entity nodes (orange), which reveals that the QA context z attends to elevator and basement in the KG, elevator and basement both attend strongly to building, and building attends to office building, which is our final answer.", "In", "(b), [Figure 4 example question: Where would you find a basement that can be accessed with an elevator?]", "we use BFS to trace attention weights from two directions: Z → Q → O and Z → A → O, which reveals concepts (sea and ocean) in the KG that are not necessarily mentioned in the QA context but bridge the reasoning between the question entity (crab) and the answer choice entity (salt water).", "While prior KG reasoning models (Lin et al., 2019; Feng et al., 2020) enumerate individual paths in the KG for model interpretation, QA-GNN is not specific to paths, and helps to find more general reasoning structures ( e.g.
, a KG subgraph with multiple anchor nodes as in example", "(a)).", "Structured reasoning, e.g., precise handling of negation or entity substitution (e.g., hair → art in Figure 5b) in questions, is crucial for making robust predictions.", "Here we analyze QA-GNN's ability to perform structured reasoning and compare it with baselines (fine-tuned LMs and existing LM+KG models).", "Quantitative analysis.", "Table 7 compares model performance on questions containing negation words (e.g., no, not, nothing, unlikely), taken from the CommonsenseQA IHtest set.", "We find that previous LM+KG models (KagNet, MHGRN) provide limited improvements over RoBERTa on questions with negation (+0.6%); whereas QA-GNN exhibits a bigger boost (+4.6%), suggesting its strength [Table 7 header residue: Methods, IHtest-Acc.]", "in structured reasoning.", "We hypothesize that QA-GNN's joint updates of the representations of the QA context and KG (during GNN message passing) allow the model to integrate semantic nuances expressed in language.", "To further study this hypothesis, we remove the connections between z and KG nodes from our QA-GNN (Table 7 bottom): now the performance on negation becomes close to the prior work, MHGRN, suggesting that the joint message passing helps for performing structured reasoning.", "Qualitative analysis.", "Figure 5 shows a case study to analyze our model's behavior for structured reasoning.", "The question on the left contains the negation not used for hair, and the correct answer is B. art supply.", "We observe that in the 1st layer of QA-GNN, the attention from z to the question entities (hair, round brush) is diffuse.", "After multiple rounds of message passing on the working graph, z attends strongly to round brush in the final layer of the GNN, but weakly to the negated entity hair.", "The model correctly predicts the answer B. art supply.", "Next, given the original question on the left, we", "(a) drop the negation or", "(b) modify the topic entity (hair → art).", "In", "(a), z now attends strongly to hair, which is not negated anymore.", "The model predicts the correct answer A. hair brush.", "In", "(b), we observe that QA-GNN recognizes the same structure as the original question (with only the entity swapped): z attends weakly to the negated entity (art) like before, and the model correctly predicts A. hair brush over B. art supply.", "Table 8 shows additional examples, where we compare QA-GNN's predictions with the LM baseline (RoBERTa).", "We observe that RoBERTa tends to make the same prediction despite the modifications we make to the original questions (e.g., drop/insert negation, change an entity); on the other hand, QA-GNN adapts its predictions to the modifications correctly (except for double negation). [Table 8 residue begins: If it is not used for hair, a round brush is an example of what?]", "[Original] If it is not used for hair, a round brush is an example of what?", "[Table 8 row, choices A. hair brush / B. art supply; RoBERTa: A. hair brush; ours: B. art supply (correctness marks lost in extraction).]", "[Negation flip] If it is used for hair, a round brush is an example of what?", "[Table 8 row, RoBERTa: A. hair brush (just no change?); ours: A. hair brush.]", "[Entity change] If it is not used for art, a round brush is an example of what?", "[Table 8 row, RoBERTa: B. bored (just no change?); ours: A. interested.] [Table 8 caption: Case study of structured reasoning, comparing predictions by RoBERTa and our model (RoBERTa + QA-GNN).]", "Our model correctly handles changes in negation and topic entities.", "[Table 8 row, RoBERTa: A. hair brush (just no change?); ours: A. hair brush.] [Original] If you have to read a book that is very dry you may become what?", "not become what?", "[Table 8 row, RoBERTa: 
B. bored (just no change?); ours: A. interested.] [Table 8 column headers: Example (Original taken from CommonsenseQA Dev), RoBERTa Prediction, Our Prediction.] [Original] If it is not used for hair, a round brush is an example of what?", "Various works have studied methods to augment NLP systems with knowledge.", "Existing works (Pan et al., 2019; Ye et al., 2019; Petroni et al., 2019; Bosselut et al., 2019) study pre-trained LMs' potential as latent knowledge bases.", "To provide more explicit and interpretable knowledge, several works integrate structured knowledge (KGs) into LMs (Mihaylov and Frank, 2018; Lin et al., 2019; Wang et al., 2019a; Yang et al., 2019; Wang et al., 2020b; Bosselut et al., 2021).", "[Figure 5 caption: Given an original question (left), we modify its negation (middle) or topic entity (right): we find that QA-GNN adapts attention weights and final predictions accordingly, suggesting its capability to handle structured reasoning.]", "[Table 8 row, RoBERTa: B. bored; ours: A. interested.] [Negation ver. 2] If you have to read a book that is not dry you may become what?", "[Table 8 row, RoBERTa: B. bored; ours: A. interested.] [Double negation] If you have to read a book that is not dry you may not become what?", "We find that KG node relevance scoring (Section 3.2) is helpful when the retrieved KG (G_sub) is large.", "Table 9 shows model performance on questions containing fewer (≤10) or more (>10) entities in the CommonsenseQA IHtest set (on average, the former and latter result in 90 and 160 nodes in G_sub, respectively).", "Existing LM+KG models such as MHGRN achieve limited performance on questions with more entities due to the size and noisiness of the retrieved KGs: 70.1% accuracy vs. 71.5% accuracy on questions with fewer entities.", "KG node relevance scoring mitigates this bottleneck, reducing the accuracy discrepancy: 73.5% and 73.4% accuracy on questions with more/fewer entities, respectively.", "Question answering with LM+KG.", "In particular, a line of work proposes LM+KG methods for question answering.", "Most closely related to ours are the works by Lin et al. (2019), Feng et al. (2020), and Lv et al. (2020).", "Our novelties are (1) the joint graph of the QA context and KG, on which we mutually update the representations of the LM and KG; and (2) language-conditioned KG node relevance scoring.", "Other works on scoring or pruning KG nodes/paths rely on graph-based metrics such as PageRank, centrality, and off-the-shelf KG embeddings (Paul and Frank, 2019; Fadnis et al., 2019; Bauer et al., 2018; Lin et al., 2019), without reflecting the QA context.", "Other QA tasks.", "Several works study other forms of question answering tasks, e.g., passage-based QA, where systems identify answers using given or retrieved documents (Rajpurkar et al., 2016; Joshi et al., 2017; Yang et al., 2018), and KBQA, where systems perform semantic parsing of a given question and execute the parsed queries on knowledge bases (Berant et al., 2013; Yih et al., 2016; Yu et al., 2018).", "Different from these tasks, we approach question answering using knowledge available in LMs and KGs.", "Knowledge representations.", "Several works study joint representations of external textual knowledge (e.g., Wikipedia articles) and structured knowledge (e.g.
, KGs) (Riedel et al., 2013; Toutanova et al., 2015; Xiong et al., 2019; Sun et al., 2019; Wang et al., 2019b).", "The primary distinction of our joint graph representation is that we construct a graph connecting each question and KG rather than textual and structural knowledge, approaching a complementary problem to the above works.", "Graph neural networks (GNNs).", "GNNs have been shown to be effective for modeling graph-based data.", "Several works use GNNs to model the structure of text (Yasunaga et al., 2017; Zhang et al., 2018; Yasunaga and Liang, 2020) or KGs (Wang et al., 2020a).", "In contrast to these works, QA-GNN jointly models the language and KG.", "Graph Attention Networks (GATs) (Velickovic et al., 2018) perform attention-based message passing to induce graph representations.", "We build on this framework, and further condition the GNN on the language input by introducing a QA context node (3.1), KG node relevance scoring (3.2), and joint update of the KG and language representations (3.3).", "We presented QA-GNN, an end-to-end question", "answering model that leverages LMs and KGs.", "Our key innovations include", "(i) Relevance scoring , where we compute the relevance of KG nodes conditioned on the given QA context, and", "(ii) Joint reasoning over the QA context and KGs, where we connect the two sources of information via the working graph, and jointly update their representations through GNN message passing.", "Through both quantitative and qualitative analyses, we showed QA-GNN's improvements over existing LM and LM+KG models on question answering tasks, as well as its capability to perform interpretable and structured reasoning, e.g. , correctly handling negation in questions.", "We thank Rok Sosic, Weihua Hu, Jing Huang, Michele Catasta, members of the SNAP research group, P-Lambda group and Project MOWGLI", "We gratefully acknowledge the support of DARPA under Nos.", "N660011924033 (MCS); Funai Foundation Fellowship; ARO under Nos.", "W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos.", "OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), IIS-2030477 (RAPID); Stanford Data Science Initiative, Wu Tsai Neuro-sciences Institute, Chan Zuckerberg Biohub, Amazon, JP-Morgan Chase, Docomo, Hitachi, JD.com, KDDI, NVIDIA, Dell, Toshiba, and United Health Group.", "Hongyu Ren is supported by Masason Foundation Fellowship and the Apple PhD Fellowship.", "Jure Leskovec is a Chan Zuckerberg Biohub investigator.", "All code and data are available at https://github.com/michiyasunaga/qagnn .", "Experiments are available at https://worksheets.codalab.org/worksheets/0xf215deb05edf44a2ac353c711f52a25f ." ]
[ "abstain", "abstain", "abstain", "objective", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "other", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "method", "other", "other", "other", "method", "other", "other", "method", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "method", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other" ]
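The QA-GNN record above describes LM-based relevance scoring (Eq. 1): each retrieved KG entity is scored by encoding the concatenation of the QA context text and the entity text with a pretrained LM and mapping that representation to a scalar. Below is a minimal, hypothetical Python sketch of that idea using Hugging Face Transformers; the model name, the untrained linear score head (standing in here for the likelihood-based head f_head described in the record), and the example strings are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of LM-based KG node relevance scoring in the spirit of QA-GNN's Eq. 1.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")          # plays the role of f_enc
score_head = torch.nn.Linear(encoder.config.hidden_size, 1)  # stand-in for f_head (untrained)

def relevance_score(qa_context: str, entity_name: str) -> float:
    """Score one KG entity by encoding the pair [text(z); text(v)] and mapping
    the start-token representation to a scalar."""
    inputs = tokenizer(qa_context, entity_name, return_tensors="pt", truncation=True)
    with torch.no_grad():
        h = encoder(**inputs).last_hidden_state[:, 0]  # representation of the <s> token
        return score_head(h).squeeze().item()

qa_context = "If it is not used for hair, a round brush is an example of what? art supplies"
entities = ["round brush", "art supply", "holiday", "river bank"]
# Rank retrieved KG nodes; low-scoring nodes could be pruned from the working graph.
ranked = sorted(entities, key=lambda e: relevance_score(qa_context, e), reverse=True)
print(ranked)
```

In QA-GNN itself, the resulting score is additionally embedded and fed into the GNN's attention computation (Eqs. 5-8), so it influences message passing on the working graph rather than only pruning.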
[ "Generating factual, long-form text such as Wikipedia articles raises three key challenges: how to gather relevant evidence, how to structure information into well-formed text, and how to ensure that the generated text is factually correct.", "We address these by developing a model for English text that uses a retrieval mechanism to identify relevant supporting information on the web and a cache-based pre-trained encoder-decoder to generate long-form biographies section by section, including citation information.", "To assess the impact of available web evidence on the output text, we compare the performance of our approach when generating biographies about women (for which less information is available on the web) vs. biographies generally.", "To this end, we curate a dataset of 1,500 biographies about women.", "We analyze our generated text to understand how differences in available web evidence data affect generation.", "We evaluate the factuality, fluency, and quality of the generated texts using automatic metrics and human evaluation.", "We hope that these techniques can be used as a starting point for human writers, to aid in reducing the complexity inherent in the creation of long-form, factual text.", "Wikipedia has become one of the major sources of dissemination of knowledge across the globe.", "However, the knowledge contained in Wikipedia is not neutral it is biased in various ways (Hinnosaar, 2019; Schmahl et al., 2020).", "Many studies, including those from the Wikimedia Foundation itself, have emphasized that biographies in particular are overwhelmingly written about men.", "This leads to many subtle yet far-reaching effects, from students not writing their first book reports on a woman to bias in models trained on Wikipedia, as Wikipedia has long been used as a source of data.", "Many existing efforts, such as the Wikipedia Women in Red project, focus on encouraging article creation to mitigate this gender gap.", "However, Wikipedia articles remain painstakingly written and edited primarily by a network of human contributors.", "Despite advances in text generation and modeling architectures that retrieve information, the automatic creation of Wikipedia articles is incredibly challenging (Liu et al., 2018).", "Even the functionality of tools that aid human editors are limited.", "In this work, we strive to create a system that could write an entire Wikipedia article in English, focusing on the biography domain.", "We confront several major challenges.", "First, this is fundamentally a long-form generation task.", "Improvements driven by pretraining (Radford et al., 2019; Lewis et al., 2019) have improved generation fluency at the level of multiple sentences.", "However, Wikipedia biographies contain multiple paragraphs in a structured form with headings, as well as citations to indicate where the information originated from.", "Second, the task confronts obstacles around the factuality (Elazar et al., 2021) of generated content, as articles must be factually accurate.", "Third, Wikipedia articles are written using reference material, often found on the web (Piktus et al., 2021).", "Thus, models need to find and ingest web searches as a pre-requisite to writing accurate biographies.", "We develop a method for English Wikipedia that starts with the subject and occupation of the biography, then leverages web search to find relevant evidence.", "Given search results, we employ a retrieval-augmented generation architecture (Lewis et al., 2020; Guu et al., 2020) based on 
large-scale pretraining to identify relevant information and write the biography.", "We generate section by section, using a caching mechanism similar to Transformer-XL (Dai et al., 2019) to reference previous sections and achieve greater document-level context.", "Finally, after each section, we append a citation based on which web searches were retrieved.", "2004), entailment, and named entity coverage.", "Further, we study the strong dependency of our method on accurate retrieval, and design a specific evaluation dataset that highlights this challenge.", "The dataset consists of 1,527 Wikipedia biographies about women, where information on the internet is not as easily retrieved.", "We use this dataset to analyze the gap between model quality when retrieval is challenging (our novel evaluation dataset with biographies about women) and model quality when retrieval is more accurate (a random set of evaluation biographies).", "Finally, we conduct a large-scale human evaluation to measure the factuality and coverage of our generated biographies.", "We hope that our techniques can eventually be used as a starting point for human Wikipedia writers, for biographies and beyond.", "A large body of work in generation utilizes Wikipedia, often for data-to-text tasks that use Wikidata or DBpedia RDF triples (Gardent et al., 2017; Castro Ferreira et al., 2020; Kaffee et al., 2018b; Vougiouklis et al., 2018; Sha et al., 2018; Puduppully et al., 2019; Chen et al., 2020b; Wang et al., 2020; Agarwal et al., 2020; Parikh et al., 2020), as well as graphs (Jin et al., 2020) as input.", "Some have focused on long text, such as writing summaries (Chen et al., 2020a) or sections of articles (Kaffee et al., 2020), expanding stubs (Baner-jee and Mitra, 2015), and writing full articles (Liu et al., 2018).", "Some of these works utilize structure to learn templates (Sauper and Barzilay, 2009), Markov logic networks (Liu et al., 2010), or word graphs (Banerjee and Mitra, 2015), but we anticipate that pretraining and large neural network based techniques will vastly improve upon this quality.", "Closest to our work, Liu et al. 
(2018) use web evidence to write full length articles, but do not focus on biographies and use extractive summarisation techniques rather than a retrieval mechanism to identify relevant information.", "Further, their work generates the entire Wikipedia article at once, whereas we demonstrate that breaking down the article to generate section by section is more effective.", "We also include a mechanism for the model to generate citations, which was not included in existing work.", "Thus, our model can produce a full-form Wikipedia article that would look like what a human editor wrote.", "Finally, our work", "(i) leverages recent advances in large-scale pretraining, which improves generation fluency and", "(ii) investigates the impact of available web evidence on the generated texts.", "Other work has focused on automatic creation of biographies, such as generation from infoboxes (Le-bret et al., 2016) or Wikidata (Chisholm et al., 2017), as well as extracting biographical sentences (Biadsy et al., 2008).", "The majority of existing research focused on short biographies.", "Retrieval mechanisms have been used to support a variety of tasks, including dialogue (Moghe et al., 2018; Dinan et al., 2018; Shuster et al., 2021), fact verification (Thorne et al., 2018), and sentence generation (Guu et al., 2018).", "Most notably, retrieval has been heavily used in question answering (Chen et al., 2017; Kwiatkowski et al., 2019; Seo et al., 2019; Karpukhin et al., 2020).", "Recent innovations in incorporating retrieval mechanisms have increased the quality and scale of retrieval-augmented generative methods (Guu et al., 2020; Lewis et al., 2020; Izacard and Grave, 2020).", "Gender bias on Wikipedia is a well-known problem (Hinnosaar, 2019; Dinan et al., 2020; Schmahl et al., 2020), particularly in the case of biographies (Graells-Garrido et al., 2015; Stratigakos, 2016; Luo et al., 2018; Schmahl et al., 2020).", "This bias is compounded by geographical location, as information about certain areas of the world is far more prevalent (Kaffee et al., 2018a; Beyta, 2020).", "This bias exists not only in what articles are written, but also in articles targeted for deletion articles about certain marginalized groups are removed at higher rates (Worku et al., 2020).", "Wikipedia reflects biases present in society (De-Arteaga et al., 2019; Young et al., 2020; Schmahl et al., 2020), though numerous initiatives exist to de-bias Wikipedia.", "These range from training programs (Iglesias, 2020) to projects such as Women in Red 1 and WikiProject Women 2 .", "The success of these initiatives has been studied (Langrock and Gonzlez-Bailn, 2020) and found to be effective, but not at addressing the systemic challenges that create bias in the first place.", "In the natural language processing community, work has focused on combating gender bias in co-reference resolution (Zhao et al., 2018), dialogue (Dinan et al., 2019; Lee et al., 2019; Liu et al., 2020), detection of abusive language (Park et al., 2018), machine translation (Stanovsky et al., 2019), and word embeddings (Gonen and Goldberg, 2019).", "These works present a variety of strategies, including data augmentation, additional data collection efforts, modified generation, and fair evaluation (Yeo and Chen, 2020).", "A comprehensive survey can be found in Blodgett et al. 
(2020).", "However, most of these efforts are focused on specific tasks or models our work uniquely targets generation of full Wikipedia biographies to combat gender bias present on Wikipedia.", "Given a person's name, one or more occupation(s), and CommonCrawl as a source of evidence, the task is to generate a Wikipedia biography and to associate each generated section with adequate bibliographic references.", "We model this task by generating a biography section by section using section headers as additional information.", "A special section header called toplevel is used as the start of the article.", "The subsequent headers are automatically generated at the end of each section as input for the next.", "Thus for each section, the input includes a name, one or more occupations, a section header, and CommonCrawl as a retrieval corpus.", "Wikipedia biographies begin with an introductory paragraph followed by various subsections 3 .", "To account for this structure and generate long-form text based on retrieved web evidence, our system, illustrated in Figure 1, generates a biography section by section.", "Based on the subject, their occupation(s), and the section heading, the model first identifies a subset of relevant evidence from a set of web search results found using that triplet ( retrieval module ).", "It then conditions upon that evidence to generate the section, using a Sequence-to-Sequence model ( generation module ) which can access previous sections using a caching mechanism.", "Finally, the model indicates which evidence documents it used and outputs those as citations, mimicking a standard Wikipedia article ( citation module ).", "We focus 3 Many biographies contain infoboxes, which we do not generate.", "Given a query Q and a set of web documents D retrieved from the web based on this query, the task of the retrieval module is to retrieve the subset of D that is most relevant given Q .", "The challenge is sifting through the large quantity of potentially useful information.", "Query.", "The query Q consists of three parts: (1) the name of the person for which the biography is generated, (2) their , possibly multiple, occupation(s), and (3) a section heading.", "Including the occupation narrows the realm of potential relevant content, especially as proper names are often ambiguous (e.g. Jane Wang ).", "Similarly, the section header allows the model to retrieve different information for each section (e.g. 
Personal Life compared to Career ).", "Documents.", "The query Q is put through a search engine to retrieve web hits, which form the set of documents D that are candidates for retrieval.", "The web results are represented only as text, and all non-text information is discarded.", "Retrieval.", "To retrieve the relevant subset of D , each sentence in D is encoded with RoBERTa base trained with LayerDrop (Fan et al., 2019b; Liu et al., 2019; Devlin et al., 2018).", "The concatenation of the subject's name, occupation(s), and section header is also encoded.", "We then calculate the dot product to identify which encoded document sentences are most relevant given the currently encoded query Q , following the strategy used in other retrieval works (Karpukhin et al., 2020).", "The representation of the top k most relevant sentences are then passed onwards through the model.", "Note that compared to some other retrieval-augmented generation (Lewis et al., 2020), the RoBERTa encoder is not fixed, so the retrieval module learns based on the performance of the generation module.", "This is possible because our retrieval is far smaller scale, we limit the search to approximately 40 sentences (1,000 words) that could be used to generate each section.", "To generate the sections we use a Transformer-based Sequence-to-Sequence model initialized with BART-Large (Lewis et al., 2019).", "The input to BART is the concatenation of the subject's name, 8563 Figure 1: Model Architecture.", "occupation(s), the section header and the retrieved evidence.", "Note that the maximum number of input tokens for BART is 1024 words, which is why we cap the retrieval at 1000 words, as described in the previous section.", "The decoder conditions on the input information to generate the section.", "One challenge with this is that the sections would be generated completely independently, which might result in redundancy between generated sections.", "Thus, we equip the Sequence-to-Sequence model with a mechanism to refer to previous sections using the cache mechanism from Transformer-XL (Dai et al., 2019).", "This mechanism caches the previous section's hidden states at every layer, using it as memory to generate the current section.", "Recent work has focused on models that not only perform a task, but also produce an explanation (DeYoung et al., 2019).", "Much of this work has focused on question answering (Latcinnik and Be-rant, 2020; Lamm et al., 2020; Lakhotia et al., 2020; Gonzalez et al., 2020) and generating explanations in natural language (Camburu et al., 2019; Narang et al., 2020; Kumar and Talukdar, 2020; Hase et al., 2020).", "A similar requirement exists on Wikipedia not only to collate the information into an article, but to provide the original references for users to verify.", "Thus, to complete the generation of a full Wikipedia biography, we cite the information used, as in any real article.", "On Wikipedia itself, each sentence could contain citations.", "We simplify this, citing at the end of each section.", "To do this, we track the original document the retrieved evidence originates from, and reference that document at the end of the generated section.", "To write a full biography, models must generate the introductory paragraph followed by each section.", "For a new article, the introductory paragraph is given as a section heading called toplevel .", "For each subsequent section, we follow the process outlined above to retrieve evidence, then write a section, then add citations.", "At the end of each 
section, the model generates the section heading of the next section.", "This allows the model to generate an entire article section by section.", "A possible failure point for our method is the retrieval step, as good biography generation requires access to sufficient relevant information.", "To study the impact of accurate retrieval on generation quality, we design a specific evaluation dataset that pushes this problem to the forefront.", "Specifically, we create a novel evaluation dataset which consists exclusively of biographies about women.", "Ongoing efforts to write biographies about women in the Wikipedia editor community, such as the Women in Red project, have identified insufficient online evidence as a major challenge for writing Wikipedia biographies about women.", "To study the importance of retrieval on model quality, we therefore create an evaluation dataset where the target Wikipedia articles are women bios.", "We collate candidate biographies, retrieve information about their occupation, and gather web sources using web search.", "The resulting dataset, summarized in Table 2, consists of 1,527 biographies, each linked to a set of retrieved web articles.", "Identifying Biographical Subjects.", "We first source various notable women on Wikipedia using internet lists (e.g., Famous Women you should know) and existing efforts by collective groups of Wikipedia editors, such as the Women in Red project.", "Several recent efforts focus on Women in Science 4 , and so we specifically include scientists as a category.", "Overall, we collate almost two thousand candidate Wikipedia women biographies.", "We then narrow down by selecting articles that have previously been rated Featured Article or Good quality.", "The final evaluation dataset contains 1,527 biographies in four groups: Women, Women in Science, Women in Asia, and Women in Africa (see Table 2).", "4 https://towardsdatascience.com/who-is-wikipedia-famous-within-natural-language-processing-fa0c8e91bdf6?gi=b910dd838c47", "https://www.newscientist.com/article/mg24532680-800-jess-wades-one-woman-mission-to-diversify-wikipedias-science-stories/", "Biography Text and Occupation.", "After finalizing candidate Wikipedia biographies, we use the MediaWiki API 5 to query the text of the article.", "We use the Wikidata API 6 to retrieve the individual's, possibly multiple, occupation(s) (e.g., Rachel Carson is an author and an environmental activist).", "As seen in Table 2, on average, articles have around 6 sections with 130 words each.", "The most common occupations include writers, teachers, and doctors (see Table 1), though the entire dataset contains almost 500 different occupations, with people having on average 2 occupations (see Table 2).", "Retrieving Web Evidence.", "Next, we identify web sources with reference evidence for each biography.", "We follow the construction of similar datasets, such as WikiSum (Liu et al., 2018) and ELI5 (Fan et al., 2019c), which search through CommonCrawl.", "We query CommonCrawl based on the subject's name and occupation(s) and return the top 20 search results.", "We reject all CommonCrawl links from Wikipedia, to prevent querying the Wikipedia articles in our dataset.", "Statistics are presented in Table", "2.
Out of a maximum of 20 possible hits, on average each biography returns around 18.", "We describe our training data, baselines, and automatic and human evaluation metrics.", "Training Data.", "We utilize the WikiSum (Liu et al., 2018) dataset of Wikipedia articles paired with web references.", "We filter to biographies using a combination of querying for occupations in Wikidata and using Named Entity Recognition 7 to recognize names.", "We query each article title in the WikiSum dataset to attempt to find an occupation and to see whether the title is recognized as a named entity, in order to identify the biographical subset of WikiSum.", "This produces 677,085 biographies, each associated with a set of web articles.", "Evaluation Data.", "We utilize the WikiSum (Liu et al., 2018) dataset, filtered to biographies, for evaluation.", "Similar to the training dataset, we query to identify occupational information.", "5 https://www.mediawiki.org/wiki/API 6 https://query.wikidata.org/ 7 https://spacy.io/usage/linguistic-features/ Table 1 (Example Section Headings and Occupations in Wikipedia Biographies): most common section headings are Career, Personal Life, Early Life, Biography, History; most common occupations are Writer, Politician, University Teacher, Physician, Researcher. To study the impact of retrieval and available evidence on model", "quality, we also evaluate on our constructed evaluation dataset about women (which has substantially less web-based evidence).", "As shown in Table 2, these two datasets differ in the length and quality of both the Wikipedia articles and the web-based evidence.", "Baseline.", "We compare our method described in Section 4 to a pretraining and finetuning generation baseline.", "We use the BART model (Lewis et al., 2019) and finetune on the Biography subset of the WikiSum data.", "Note that BART has a token limit of 1024; thus, the entirety of the web retrieval is not available to this model.", "We take the web search hits ordered by the search engine, and provide the first 1000 available tokens.", "To compare this baseline with our method equitably, the baseline is also trained to generate section by section.", "However, it does not use the retrieval module (all evidence is given), the caching mechanism, or the citation module (as described in Section 4), meaning citations are not added to the generated text.", "Additional training details are in the Appendix.", "Generation.", "We generate from all models with beam search, setting the beam size to 5.", "We allow the model to generate an output of any length, with no restrictions.", "For human evaluations, we set the minimum and maximum length such that it matches the length of the gold target to minimize the effect of length on human interpretations.", "Automatic Evaluation.", "We evaluate the quality of generated biographies with three automatic metrics.", "First, we measure ROUGE-L between the generated text and the Wikipedia reference text to assess their similarity.", "ROUGE-L is commonly used in multi-sentence summarization and is a measure of longest common subsequence overlap.", "Next, we use Natural Language Entailment as a high-level proxy for quantifying a form of factuality: if two sentences entail each other in both directions, then they are semantically equivalent.", "We use a model pretrained and finetuned on MNLI, open sourced by Liu et al.
(2019).", "To evaluate entailment, we split the generated biography and reference biography into sentences, then for each sentence in the generated biography we calculate if it is semantically equivalent to a sentence in the reference.", "We then compute the percentage of generated sentences that are semantically equivalent to at least one sentence in the reference biography, where entailment is evaluated bidirectionally.", "Finally, we assess the Coverage of information in the generated biography, constraining this to analyzing mentions of named entities.", "We report the percentage of named entities detected in the reference which are also detected in the generated text.", "We extract entities with BLINK , a BERT-based entity linking system (Wu et al., 2019).", "Human Evaluation Long-form text generation is very difficult to assess automatically (Thomson and Reiter, 2020; Howcroft et al., 2020), particularly for factuality (Goodrich et al., 2019; Maynez et al., 2020; Peshterliev et al., 2021) and hallucination (Zhou et al., 2020; Duek and Kasner, 2020).", "We conduct a detailed, large-scale human evaluation with the goal to assess Coverage (How much of the information in the reference section is in the generated section?) and Factuality (How much of the generated section is in the reference and, for the information added in the generated text, how much of that information is verifiable based on the web evidence?).", "To reduce the challenge of evaluation, the text is compared section by section, and the generated text is the same length as the reference by constraining the max length of beam search (to remove length as an evaluation artifact).", "First, each sentence of the generated section is shown next to the full reference section and the entire document cited in the generated section (recall our generated biographies cite the retrieved evidence).", "Evaluators are asked to decide (1) if the information in the generated sentence is present in the reference section (ground truth) and (2) if the information in the generated sentence is present in the cited document (web evi-dence).", "This question assesses if the information from the generated section is factual with respect to either the reference Wikipedia text or the retrieved web documents.", "Then, the evaluation is flipped to assess coverage with respect to the Wikipedia reference.", "Each sentence of the reference is shown next to the generated section, and evaluators are asked to decide (3) if the information in the reference sentence is present in the generated section.", "In total, human annotators evaluated 100 sections with length between 200 to 500 words.", "Each section is reviewed by one annotator.", "Additional details are in the Appendix.", "Automatic Evaluation.", "We examine the model's overall performance.", "Results are summarized in Table", "3. 
Compared to the pretraining+finetuning baseline, adding the retrieval module statistically significantly increases results by 1.4 ROUGE-L (significance is assessed using the confidence interval reported in the ROUGE package).", "Adding a caching mechanism improves further by 0.5 ROUGE-L.", "This trend is reflected across the entailment and entity coverage metrics, indicating that retrieving the most relevant information to write a biography is critical.", "Next, we examine the impact of our modeling choices using ablation (Table 4).", "Compared to previous work on WikiSum (Liu et al., 2018; Fan et al., 2019a), we add an end-to-end retrieval mechanism based on RAG that substantially improves results.", "Further, instead of retrieving solely based on the", "subject name, as was previously done (Liu et al., 2018), we retrieve on a detailed query (the name, occupation, and section heading).", "Table 4 indicates that this enriched query improves the retrieval quality by almost 2 ROUGE-L.", "We conjecture it helps improve disambiguation and retrieve evidence that is relevant to the desired entity rather than to one of its homonyms.", "We also generate the biographical articles section by section, rather than an entire article at once.", "This allows the retrieval mechanism to be focused on the section information.", "As shown in Table 4, this also has a positive effect of +1.5 ROUGE-L.", "Human Evaluation.", "Next, we examine quality with human evaluation, as shown in Figure", "3. Models generating nonfactual or hallucinated content is an ongoing area of study (Tian et al., 2019; Nie et al., 2019; Liu et al., 2021).", "Our goal is to understand how much information in the generated text is present in the reference text or the web evidence, as a proxy for factuality and coverage.", "Overall, 68% of the information in generated sections is not present in the reference text.", "Conversely, 71% of information in the reference text is not in the generated text.", "This indicates that the generated text has far from perfect coverage.", "However, we found that 17% of the added information can be validated by examining the web evidence, which shows that some information added by the generative model is valid biographical information.", "We examine why there is low information overlap between the generated and reference text.", "First, information in the reference biography may not be available on the web (note that search hits from the Wikipedia domain are removed from web search results) or may not be retrieved.", "In a manually examined subset of 250 sentences taken from reference biographies, we found that about 50% of the information was not contained in the web evidence.", "The other 50% was partially present in the web evidence but was not retrieved by the model.", "Second, annotators must compare sentences, but sentences contain partial information.", "For example, if 'Person was born in Chicago in 1968' was in the generated text and 'Person was born in Chicago' was in the reference text, this would count as the generation having information not in the reference.", "Annotators were very precise in sticking to the requested standard that the entire sentence should be factual to count as fully factual, which is reflected by annotators marking partial", "hyman is best known for her work on the classification of invertebrates.", "she was the author of a six-volume set of reference books titled the invertebrate treatise, which was published by mcgraw-hill in the united states and in germany.", "she also wrote a series
of laboratory manuals for the teaching of zoology classes nationwide.", "hyman's work has had a lasting influence on scientific thinking about a number of animal groups, and the only works that can be compared with hers are of composite authorship.", "factuality as not factual.", "Our stringent standard for factuality produces a clearer understanding of hallucinations at the sentence level.", "In summary, our investigation suggests two explanations for the low coverage reported by human annotators: lack of information in the web evidence and difficulty assessing whether two sentences contain the same core knowledge.", "One major challenge of accurate Wikipedia article generation is when information is not available on the web or not easily retrieved.", "For example, information could simply not exist on the internet.", "Writing a Wikipedia biography about any randomly chosen person on the street would likely manifest this scenario.", "Other situations could include having a large number of search results returned but difficulty identifying which are relevant, having too few search results to write a good biographical article, or even having only noise returned in the search results.", "We discuss these challenges and possible mitigations in this section.", "The Evidence Gap.", "We compare the results on our evaluation set about women with those on the WikiSum test set.", "Compared to WikiSum, the unigram overlap of the web hits with the biographical article is substantially lower for our evaluation dataset (see Table 2).", "As shown in Table 5, across the board, the quality of generated biographies is higher for the WikiSum Test set.", "This is especially prominent for Women in Asia and Africa, which are more than 2.5 ROUGE-L worse on average.", "Reducing the Dependency on Retrieval.", "One challenge is that there is a disconnect between the training dataset, where retrieval information is readily available, and the women-focused evaluation dataset, where retrieval information is noisy or missing.", "We investigate the potential of a straightforward strategy to mitigate differences in training data: that of training on biographical articles with less reliable web evidence.", "Table 5 (ROUGE-L performance broken down by sub-categories; columns are WikiSum Test, Women, Scientists, Women in Asia, Women in Africa): BART Pretraining 19.0, 17.4, 18.2, 16.7, 16.4; + Retrieval 21.4, 18.8, 19.3, 17.9, 17.1; + Caching 21.8, 19.3, 19.7, 18.4, 17.3. We mimic this by finetuning our model on a subset of our evaluation", "dataset, and then testing on Women in Asia and Africa, the two categories that perform most poorly.", "As shown in Table 6, finetuning statistically significantly improves performance, though the improvement is not large (+0.5 ROUGE-L).", "Another phenomenon that arises with noisy web evidence is that retrieving more is not necessarily better.", "Perhaps only one website has really relevant information.", "In the retrieval module, all available web documents are encoded at the sentence level, and the model can select sentences across all documents.", "We next explore an approach where the model first scores documents, then selects sentences from the most relevant document.", "We found this had very similar performance, and thus conclude that the challenge of identifying relevant documents and then sentences is probably similar in difficulty to identifying relevant sentences directly.", "We developed a novel retrieval and cache-augmented generative model to generate long-form biographies based on evidence
from the web.", "Experimental evidence reveals that an enriched query including occupations, caching, and backpropagation through the retrieval module contributes to improved performance.", "We investigate the dependency on high-quality web evidence, which manifests strongly in our constructed evaluation dataset of biographies about women.", "We discuss this challenge and possible mitigations.", "We thank the anonymous reviewers for their feedback.", "We gratefully acknowledge the support of the French National Research Agency and of Facebook AI Research Paris (for Claire Gardent; award ANR-20-CHIA-0003, XNLG \"Multi-lingual, Multi-Source Text Generation\").", "We thank Adina Williams, Emily Dinan, Ledell Wu, and Aleksandra Piktus for thoughtful discussions and feedback on this entire effort, as well as previous collaborations that influenced this work.", "We thank Sebastian Riedel, Douwe Kiela, Mona Diab, and Michael White for their suggestions to improve this work.", "We thank Mojtaba Komeili for developing the web query service we used to create the evaluation dataset.", "Finally, we thank all of the editors of Wikipiedia, particularly those in the Women in Red Project, for their hard work and dedication to creating, moderating, editing, and all that is necessary to keep Wikipedia running.", "We encourage readers to donate to Wikipedia to support this public project.", "In this section, we discuss several known limitations and ethical considerations of our work.", "We do not recommend any kind of text generation technology to be deployed on Wikipedia given this is an active area of research.", "Biographies, whether written as books or available online, reflect societal bias.", "While many Wikipedia editors rely on web-based references to create their articles, and we follow the same strategy in this work, relying on the web is flawed.", "The prominent reason is that the internet is full of bias in it of itself.", "For example, Donna Strickland, who received a Nobel Prize, did not have a Wikipedia article 10 10 https://wikimediafoundation.org/news/ 2018/10/04/donna-strickland-wikipedia/#:~:text=Donna%20Strickland%20is%20an% 8569 as there was not sufficient content about her on the web as a basis for her article.", "Thus, it is important to recognize that the availability of references is problematic, affecting the downstream ability to write accurate, comprehensive biographies.", "Further, information on the web can be contradictory, information can be affected by the passage of time, and not information on the web is necessarily factually correct.", "Our proposed modeling mechanism does not have a way to explicitly recognize or correct for these challenges, which also plagues text generation generally.", "Our work focuses on text generation in English only, which limits inclusivity purely on the basis of language.", "This is challenging as the content of the internet and Wikipedia itself is different in various languages.", "For example, articles about people from Germany may be more likely to be located on the German version of Wikipedia.", "Another factor is that the content of the references may be written in another language, and then used by a bilingual individual to write an article in English about that subject.", "This is often the case for many biographical subjects who may be more well known in a non-English speaking area.", "There are a very large number of marginalized groups in the world and numerous important intersectional aspects to consider.", "When discussing identity, 
a wide variety of factors and personal views influence individuals when thinking about how they describe themselves.", "Our evaluation dataset focuses on women alone, which leaves out many groups, including non-binary people.", "Further, Wikipedia may not reflect the up-to-date information names and gender are both mutable, for example and Wikipedia articles do not ask each subject to self-report their gender.", "Finally, we note that by grouping people into hard categories, there can potentially be harm such as limiting people from opportunities because of their gender or race.", "However, we strongly believe that it is important to recognize bias in its various forms as it exists, particularly in popular, default online sources of information such as Wikipedia.", "20optical,of%20a%20Sloan%20Research%20Fellowship.", "In this work, we focus on bias manifesting as unequal prevalence and length of biographical content on Wikipedia, focusing specifically on different intersectional groups of women.", "However, bias manifests in a number of other ways.", "Studies have indicated that the words used in biographies about women compared to biographies about men (Di-nan et al., 2019) also differs, and is reflective of gendered terminology.", "For example, many articles about women are actually written with a lot of information about men, such as their husband's careers, and articles about actresses describe more often their physical appearance.", "This is also a manifestation of bias, and we do not present any focused modeling techniques to address this type of bias explicitly.", "In the modern internet, a large number of events are recorded for the public record.", "These include events that people may personally prefer to forget, often termed right to be forgotten 11 .", "Automatically generating biographies about individuals may collate such information in an easily accessible public place, which can conflict with this personal right.", "This has a complex but important interaction with marginalized groups.", "For example, many celebrities who are women, transgender, or a part of another marginalized group are far more likely to have news articles written about intimate personal details such as plastic surgeries.", "Thus, it is important to consider the interaction of biographical data with individual privacy.", "This is a larger challenge of biographical information generally." ]
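The dense retrieval step described in the sentences above (encoding the name, occupation(s), and section header as a query, encoding each candidate evidence sentence, and ranking sentences by dot product) can be sketched as follows. This is a minimal inference-time sketch, assuming an off-the-shelf roberta-base checkpoint, first-token pooling, and a top-k of roughly 40 sentences; in the described system the encoder is trained end-to-end with the generator rather than kept fixed.

```python
# Minimal sketch of dot-product retrieval over candidate evidence sentences.
# Assumptions: "roberta-base" as the encoder, first-token pooling, and a
# fixed encoder; the described system instead backpropagates into the encoder.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder = AutoModel.from_pretrained("roberta-base")

def encode(texts):
    """Encode a list of strings into one vector each."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state
    return hidden[:, 0]  # first-token representation as the sentence vector

def retrieve(name, occupations, header, candidate_sentences, k=40):
    """Return the k candidate sentences scored highest against the query."""
    query = " ".join([name, ", ".join(occupations), header])
    q = encode([query])                      # (1, hidden)
    s = encode(candidate_sentences)          # (n, hidden)
    scores = (s @ q.T).squeeze(-1)           # dot product per sentence
    top = torch.topk(scores, k=min(k, len(candidate_sentences))).indices
    return [candidate_sentences[i] for i in top.tolist()]
```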
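The generation module conditions a BART-Large sequence-to-sequence model on the concatenation of name, occupation(s), section header, and retrieved evidence, and decodes with beam search of size 5. A hedged sketch is below: the generic facebook/bart-large checkpoint stands in for the fine-tuned model that would be needed in practice, the plain-space concatenation and length caps are illustrative, and the Transformer-XL-style cache over previous sections is not shown.

```python
# Hedged sketch of section generation with a BART-style seq2seq model.
# The off-the-shelf checkpoint and input formatting are illustrative; the
# described system fine-tunes BART-Large and adds a cache over previous sections.
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

def generate_section(name, occupations, header, evidence_sentences):
    source = " ".join([name, ", ".join(occupations), header] + evidence_sentences)
    inputs = tok(source, return_tensors="pt", truncation=True, max_length=1024)
    ids = model.generate(**inputs, num_beams=5, max_length=512,
                         no_repeat_ngram_size=3)
    return tok.decode(ids[0], skip_special_tokens=True)
```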
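Putting the modules together, the article-level loop starts from the special toplevel heading, retrieves evidence, writes the section, records which source documents the retrieved sentences came from as citations, and reads the next heading generated at the end of each section. The sketch below is schematic: retrieve_evidence, generate_section, and next_heading are placeholders for the modules above, and evidence items are assumed to be dicts carrying their source URL.

```python
# Schematic section-by-section loop with citations; helper functions are
# placeholders, not the authors' exact interfaces.
def write_biography(name, occupations, retrieve_evidence, generate_section,
                    next_heading, max_sections=10):
    article, heading = [], "toplevel"
    for _ in range(max_sections):
        evidence = retrieve_evidence(name, occupations, heading)   # ~40 sentences
        section = generate_section(name, occupations, heading,
                                   [e["text"] for e in evidence])
        citations = sorted({e["source_url"] for e in evidence})    # cite source docs
        article.append((heading, section, citations))
        heading = next_heading(section)   # heading emitted at the end of the section
        if heading is None:               # model signals that the article is complete
            break
    return article
```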
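The biographical subset of WikiSum is identified by checking each article title for an occupation in Wikidata and for being recognized as a person name. A rough sketch using spaCy for the named entity check is given below; the occupation_lookup callable (e.g., a SPARQL query against Wikidata) is an assumption about how the Wikidata side would be implemented.

```python
# Rough sketch of the biography filter over WikiSum article titles.
# Requires the spaCy model: python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def is_biography(title, occupation_lookup):
    """Keep a title if NER tags it as a PERSON and an occupation is found."""
    is_person = any(ent.label_ == "PERSON" for ent in nlp(title).ents)
    occupations = occupation_lookup(title)   # caller-supplied Wikidata query
    return is_person and bool(occupations)
```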
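The entailment-based factuality proxy treats two sentences as semantically equivalent when an MNLI model predicts entailment in both directions, and reports the share of generated sentences equivalent to at least one reference sentence. The sketch below assumes the roberta-large-mnli checkpoint as the pretrained-and-finetuned MNLI model; the exact checkpoint used in the described experiments may differ.

```python
# Hedged sketch of the bidirectional-entailment ("semantic equivalence") metric.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def entails(premise, hypothesis):
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**inputs).logits
    return nli.config.id2label[int(logits.argmax(dim=-1))].upper() == "ENTAILMENT"

def equivalent(a, b):
    # Bidirectional entailment is taken as semantic equivalence.
    return entails(a, b) and entails(b, a)

def entailment_score(generated_sents, reference_sents):
    """Share of generated sentences equivalent to at least one reference sentence."""
    hits = sum(any(equivalent(g, r) for r in reference_sents) for g in generated_sents)
    return hits / max(len(generated_sents), 1)
```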
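Entity coverage reports the percentage of named entities found in the reference biography that also appear in the generated text. The described setup links entities with BLINK; the sketch below substitutes spaCy NER and surface-string matching as a lightweight stand-in, which is only an approximation of that setup.

```python
# Approximate entity-coverage metric using spaCy NER as a stand-in for BLINK.
import spacy

nlp = spacy.load("en_core_web_sm")

def entity_coverage(generated_text, reference_text):
    """Fraction of reference entities whose surface form appears in the generation."""
    ref_entities = {ent.text.lower() for ent in nlp(reference_text).ents}
    if not ref_entities:
        return 1.0
    generated_lower = generated_text.lower()
    return sum(e in generated_lower for e in ref_entities) / len(ref_entities)
```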
[ "abstain", "objective", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "method", "abstain", "result", "abstain", "objective", "method", "method", "other", "other", "abstain", "method", "objective", "method", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "The learning trajectories of linguistic phenomena in humans provide insight into linguistic representation, beyond what can be gleaned from inspecting the behavior of an adult speaker.", "To apply a similar approach to analyze neural language models (NLM), it is first necessary to establish that different models are similar enough in the generalizations they make.", "In this paper, we show that NLMs with different initialization, architecture, and training data acquire linguistic phenomena in a similar order, despite their different end performance.", "These findings suggest that there is some mutual inductive bias that underlies these models' learning of linguistic phenomena.", "Taking inspiration from psycholinguistics, we argue that studying this inductive bias is an opportunity to study the linguistic representation implicit in NLMs.", "Leveraging these findings, we compare the relative performance on different phenomena at varying learning stages with simpler reference models.", "Results suggest that NLMs exhibit consistent developmental stages.", "Moreover, we find the learning trajectory to be approximately one-dimensional: given an NLM with a certain overall performance, it is possible to predict what linguistic generalizations it has already acquired.", "Initial analysis of these stages presents phenomena clusters (notably morphological ones), whose performance progresses in unison, suggesting a potential link between the generalizations behind them.", "Children present remarkable consistency in their patterns of language acquisition.", "They often acquire linguistic phenomena in a similar order (Kuhl et al., 1992; Ingram, 1989), and make similar generalizations and over-generalizations (Kuczaj II, 1977; Pinker, 1995).", "This consistency provides an important starting point for linguistic study.", "For example, arguments in favor of single or dual system accounts of morphological representation are often backed by computational models of children learning trajectories (e.g., Rumelhart and McClelland, 1986; Pinker and Prince, 1988; Kirov and Cotterell, 2018).", "In this paper, we embrace this program for the study of computational language models, investigating learning trajectories.", "1 The representations that language models (LM) acquire have been studied extensively, including studying their learning dynamics to improve training (see 6).", "However, very little work aimed at drawing connections between the training dynamics and the learned representations.", "In this work we adopt a behavioral approach, thus revealing that NLMs share learning trajectories and generalize in similar ways during training.", "This implies that studying trajectories of NLMs is worthwhile, in the sense that results on one architecture or size are expected to be reproducible by others.", "These findings call for a characterization of these trajectories, a new and promising territory for research.", "We take first steps to explore these directions, emphasizing their potential benefit to a better future understanding of what models learn.", "Specifically, we train NLMs on next-word prediction, but evaluate and compare them by tracking their performance on grammar learning in English, using the BLIMP dataset (See 2.1).", "BLIMP is a dataset that consists of 67K minimal pairs, where each pair includes a grammatically correct and a grammatically erroneous sentence.", "NLMs are tested for their ability to assign higher probability to the correct one.", "See example in Table 1, and details of our 
experimental methodology in 2.", "We begin (3) by establishing that NLMs learn grammatical phenomena in a consistent order.", "We evaluate NLMs at different time points along their training, showing that the performance on linguis-1 Code is supplied in https://github.com/borgr/ ordert 8281 Challenge Correct Erroneous Animate subject Galileo had talked to Bell.", "tic phenomena across initializations is highly correlated.", "We further find many similarities in the set of examples that they correctly classify.", "Still, models of different architectures learn at a different pace, and hence cannot be directly compared at identical time points.", "In 3.3, we overcome this by re-scaling the timeline.", "We then show that despite architectural differences, NLMs present highly correlated performance trajectories.", "In 3.4, we further demonstrate that even the choice of training data has minor influence on the results.", "Finally, in 3.5 we show that the learning dynamics essentially follows a single dimension.", "Namely, where the average performance is similar, success on linguistic phenomena is also similar.", "We proceed by analyzing the early stages of learning in 4.", "We find that, at first, NLMs rely mostly on local cues and not on word order.", "They thus resemble bag-of-words models over a window of the preceding tokens.", "Later stages seem to drift further away from bag-of-words models toward n -gram models, and with time seem to be more sensitive to structural cues.", "We also find evidence that some latent features that the model learns may not be related to linguistic phenomena.", "Finally, in 5 we take the first steps in categorizing linguistic phenomena by their learning trajectories.", "We identify links between their representations by finding phenomena that progress in unison.", "For example, we find that morphological phenomena are mostly learned at similar stages.", "Of particular interest are cases where performance decreases with time, which may suggest either overgeneralization or biases in the BLIMP challenges.", "We use BLIMP (Warstadt et al., 2019) to assess the extent to which generalizations are made by the NLMs.", "BLIMP includes 67 grammatical challenges categorized into 13 super-phenomena (e.g., island-related or quantifiers) comprising of 4 broad fields (e.g., Syntax, Semantics).", "Each challenge consists of 1K minimal pairs of sentences.", "A minimal pair contains a sentence and a near-duplicate distractor that incorporates an error on a particular linguistic phenomenon, i.e., only the phenomenon in question is changed between the sentences in a pair (see Table 1).", "Each challenge includes pairs with the same linguistic phenomenon.", "LM details: as training multiple GPT2 instances (Radford et al., 2019) is computationally demanding, we train smaller NLMs.", "Following Turc et al. 
(2019), we trained 1 instance of GPT2 small (width 768 , 12 layers, 8 attention heads) and 4 instances of GPT2 tiny (width 512 , 4 layers, 4 attention heads), with different random seeds.", "Similarly, we train a small TransformerXL (Dai et al., 2019), XL small (width 512 , 4 layers, 8 attention heads) and a full-sized one (width 4096 , 18 layers, 16 attention heads).", "We stop the full model after 600K steps, while the perplexity remained high.", "We use it for comparison to the early stages of learning of TransformerXL.", "All models' hyperparameters can be found in App.", "B.", "We also use the results of the fully trained GPT2, TransformerXL, LSTM and human performance reported in Warstadt et al. (2019).", "In 4, we compare NLMs with simpler models.", "To this end, we create two GPT2 tiny variations, denoted BOW and Window-5 .", "BOW replicates GPT2 tiny , but relies only on bag of words.", "This is achieved by removing the positional weights, and replacing the attention weights with a simple average.", "2 Window-5 similarly ignores the positions, and additionally only attends to the last 5 words.", "Note that both are unidirectional LMs and consider only previously predicted words at each step.", "Unless explicitly stated otherwise (as in 3.4), all models were trained on the WikiBooks dataset (Zhu et al., 2015), which contains the English Wikipedia ( 2 . 1 B words) and BookCorpus ( 854 M words).", "This dataset resembles BERT's training data (Devlin et al., 2019), except that current Wikipedia is used.", "Additionally, we trained models on the following datasets: English openSubtitles (Lison and Tiedemann, 2016), newsCrawl (Barrault et al., 2019), GigaWord (Napoles et al., 2012), and 2 Supposedly, removing the positional embeddings would suffice.", "Empirically, it has little effect.", "Presumably, as embeddings only attend to previous positions, the network manages to represent positions by the difference between them.", "This is in line with the finding that GPT2's positional embeddings are not meaning-bearing (Wang and Chen, 2020).", "a sample of openWebText (3B words; Gokaslan", "and Cohen, 2019) a replication of GPT2 dataset.", "Throughout this paper, we report Pearson correlation.", "Using Spearman correlation leads to qualitatively similar conclusions.", "When multiple models are correlated against each other, their average pairwise correlation is reported.", "In this section, we examine various aspects of NLMs, generally showing that their learning trajectories are similar.", "We evaluate network similarity by adopting a behavioral approach.", "Accordingly, networks are viewed as functions, whose latent features manifest themselves only by their influence on the network's behavior.", "Latent features are the unobserved causes of the measured behavior.", "Consequently, parameters, activation patterns and representations can be completely different among similar models.", "This is unlike the approaches employed by Williams et al. (2018); Saphra and Lopez (2019); Liu et al. 
(2021), which analyze internal representations directly.", "To formalize the above notion, let L_t denote a checkpoint, i.e., the language model L at time t.", "Let pv(L_t) denote its performance vector, defined as the accuracy obtained by L on each BLIMP challenge p: pv(L_t) = [acc(L_t, p)]_{p ∈ BLIMP} ∈ R^67 (1). Time t is measured in training steps or perplexity.", "The trajectory of the performance vector as a function of t reflects L's training dynamics.", "Given this behavioral definition, we focus on comparing the relative strength of models.", "Similarity between models is thus measured as the correlation between their performance vectors.", "Hence, models are similar if they rank phenomena in the same way.", "On the other hand, models of the same average performance can be dissimilar: consider two models that agree on everything except nouns.", "One generates only feminine nouns and the other plural nouns.", "The models' average performance is similar, but due to their biases, they are correct on different challenges.", "This dissimilarity suggests that the models rely on different latent features.", "We begin by showing that models produced by different initializations learn the same phenomena, in the same order.", "In terms of our definitions above, this may imply that despite converging to different parameter values, the learned latent features and the generalization patterns made are similar.", "In order to examine the hypothesis empirically, we compute the correlation between 4 random initializations (Fig. 1).", "Results confirm the hypothesis: the correlation between GPT2 tiny instances is extremely high.", "It is already high after 10K steps, and remains high throughout training.", "We note that the correlation at step 0 is 0 (not shown), and that after 10K warm-up steps the network's ability as an LM is still poor.", "For example, perplexity is 10.9 after 10K steps and 6.7 after 70K steps.", "Next, we show that different architectures also present similar trajectories.", "As the learning pace is not comparable across models, computing correlation in fixed and identical intervals is not informative.", "Instead, we choose t to be the perplexity on the development set, comparing models at the same performance level.", "TransformerXL is not directly comparable as perplexity requires the vocabulary to be the same.", "Following this paradigm, we see that GPT2 small and GPT2 tiny are highly correlated (> 0.9), presenting similar learning order throughout training.", "Observing the trajectories per challenge qualitatively, we see that they align very well (cf. Fig.
a checkpoint's performance vector.", "This allows us to analyze how similarities evolve, rather than whether two trajectories are synced.", "We compare fully trained off-the-shelf NLMs with the trajectory of GPT2 tiny (Fig. 3a) and GPT2 small (App. E).", "The observed similarity to off-the-shelf models is high (0.6-0.8), implying that NLMs in general share tendencies and biases.", "Moreover, similarity increases until the point of same performance and then (when relevant) decreases.", "This suggests that the small NLM approaches off-the-shelf tendencies as it improves and stops somewhere on the same trajectory of generalizations (cf. 3.5).", "Furthermore, we find considerable correlation with the performance levels of humans on the different challenges, but still, all NLMs correlate better with our model than humans correlate with it.", "These results present a curious order imposed on the NLMs.", "Both GPT2 tiny and GPT2 small (App. E) are more similar to the LSTM model than to TransformerXL, and even less similar to GPT2 large .", "Interestingly, our models are more similar to an RNN and a model with a different architecture, than to a larger model with the same architecture.", "Thus, it seems that the architecture type cannot explain the similarities in the relative order.", "We further examine this issue in the next section.", "This section examines the possibility that the similarities reported in Fig. 3a can simply be explained by the similarity in the NLM's training data.", "More specifically, since the ranking by model similarity reported above fits the similarity between the training sets that the models were trained on, we view it as a potential confound and attempt to control for it.", "Our training data (WikiBooks) consists mostly of Wikipedia and so do the LSTM's and TransformerXL's training sets, which are trained on earlier versions of Wikipedia and WikiMatrix (Schwenk et al., 2019) respectively.", "GPT2, on the other hand, is trained on openWebText, which consists of scraped web pages.", "To tease apart the effect of training data, we trained 3 additional GPT2 tiny instances over the openWebText, openSubtitles and newsCrawl datasets.", "Results (Fig. 1) show that the dataset has more effect on the correlation than initialization.", "Hence, the choice of training data does affect the learning trajectory, but its effect decreases with training (correlation gets higher with more training steps).", "We also recompute the correlations from 3.3 after training GPT2 tiny on the same data as GPT2 large (App. F), and find that the relative order between the NLMs remains the same, with GPT2 large being the least similar.", "We conclude that while the training data affects the learned generalizations, it only very partially explains the observed similarities between NLMs.", "Based on the findings of the previous sub-sections, we hypothesize that current NLMs all learn in a similar order, where the effect of training data and architecture is secondary.", "In other words, training time, size and efficiency may affect what a model has learned, but not its learning order.", "This implies that stronger models may improve performance, but still follow a similar learning trajectory.", "If this hypothesis is correct, models should be most similar to models with the same performance; similarity should drop as the gap in performance widens.", "Controlled comparison supports this hypothesis.", "Fig. 
3b presents the correlation of GPT2 tiny training trajectory with several static checkpoints taken during GPT2 small training.", "We observe that at the point in which the average performance of GPT2 tiny is closest to that of the checkpoint, the correlation peaks, and then decreases again as GPT2 tiny surpasses the checkpoint in average performance.", "So overall correlation peaks when average performance is most similar.", "Note that despite the different network sizes and convergence rates, the correlation's maximal value is very high (higher than 0.9).", "Further experiments show similar trends.", "Fig. 3a presents a similar investigation, albeit with more varied architectures and training datasets.", "Here too the maximum correlation is obtained around the point of most similar performance.", "NLMs are most similar to other NLMs with the same performance.", "However, when compared to non-neural LMs, this is no longer the case.", "More specifically, we compare GPT2 tiny to two 5-gram LMs trained on the same dataset as the NLMs (WikiBooks) and another (GigaWord) dataset.", "Results are shown in Fig. 4, which is qualitatively different from Fig. 3a.", "Here, similarity in performance implies neither high correlation, nor the point of highest similarity.", "This serves both as a sanity check to our methodology, and as a reminder of model biases: In general, models may have different biases and tendencies, regardless of overall performance.", "In our case, it seems that NLMs share biases between them that are not necessarily shared with other LMs.", "While not the main purpose of the analysis, our comparison reveals other noteworthy trends.", "For example, 5-gram LMs trained on different corpora have different correlations to the GPT2 tiny trajectory.", "This is further discussed in App.", "G.", "We find that the order of learning is surprisingly stable across architectures, model sizes and training sets.", "Therefore, given a new NLM, the order 8285 in which it will learn linguistic phenomena can be predicted by another model that achieves a similar average accuracy.", "When considering non-neural LMs, this observation does not always hold: inherently different architectures (such as 5-grams) have very different trajectories.", "Hence, future models with very different induced biases may present different orders.", "Having established that different NLMs learn in a consistent order, we investigate the emerging learning trajectory by comparing it with simpler reference models.", "Our goal is to identify distinct learning phases that characterize NLM's training.", "Setup.", "We compare GPT2 tiny to fully trained LMs (same as 3.3), as well as to a variety of metrics.", "For each metric m we compute the average score over each example for each of the 67 sets E p i p [ m ( p i )] R 67 .", "The results are replicated with GPT2 small and TransformerXL and lead to similar conclusions (see App. 
E).", "Sentence-level Metrics.", "First, we consider two sentence-level metrics: sentence length (in tokens) and syntactic depth.", "Assuming a sentence parse tree, the depth is the longest path from a word to the root.", "Sentence length is often considered to be a source of challenge for infants (Brown, 1973) and networks (Neishi and Yoshinaga, 2019), regardless of the sentence's complexity.", "Syntactic depth (Yngve, 1960) is a measure used to assess how cognitively complex a sentence is.", "We leave the question of which measure of linguistic complexity (Szmrecsnyi, 2004) correlates best with the trajectory exhibited by NLMs to future work.", "Our results (Fig. 5) show that neither sentence-level metric (length and syntactic depth) can predict well what is difficult for the model.", "This is not surprising, as both measures only capture sentence complexity at a general level, and are not directly related to the linguistic phenomenon that is being tested.", "We do see that the syntactic depth starts off as a worse predictor of the NLM performance and ends as a better one.", "We provide a different perspective on this initial learning phase, before and after that switch, later in this section.", "Next, we compare the performance vector with task difficulty for humans, as reported in the original BLIMP paper.", "We observe that correlation is fairly high after a sufficient number of steps.", "In fact, the network becomes more similar to humans as it improves: at the beginning, the network relies on different features than humans, but with time more of the hurdles are shared.", "However, correlation saturates at a mid-range correlation of under 0.5.", "This suggests that the network (partially) relies on features that are not used by human annotators.", "These may be valid generalizations not tested by BLIMP, or erroneous ones that are still beneficial to reduce the score on the task it was trained on (cf. McCoy et al., 2019).", "We revisit this issue in 5.", "Comparison with Limited Context and Locality.", "Our methodology opens the door to examine other potential biases of LMs.", "We now do so, starting with context and locality.", "We consider models that take into account different scopes of context: unigram, and 2-5 gram LMs that can exploit the order of preceding words.", "We argue that the correlation between NLMs and n -gram LMs may indicate that features based on limited context are also employed by NLMs.", "Surprisingly, the unigram model, which doesn't use context, perfectly classifies 7 phenomena, achieves 98.1% accuracy on 1, and completely fails (0% accuracy) on 8.", "This suggests that high accuracy on some syntactic and semantic challenges (as defined by BLIMP) can be achieved by simple heuristics.", "Note, however, that the NLMs we test are not trained towards any specific phenomena and are not fine-tuned in any way.", "Hence, NLMs can only attain heuristics or biases (generalization errors) which are beneficial in general, not ones specific to our test challenges.", "While NLMs initially present a strong correlation with the unigram model, this correlation quickly drops (see Fig. 5).", "From the outset, GPT2 tiny succeeds on 6 of the 8 phenomena that are classified well by unigrams, and 4 of the 8 that 8286 the unigram model utterly fails on.", "Interestingly, for 3 of the other phenomena on which the unigram failed, GPT2 tiny initially achieves 0% accuracy (chance level is 50%), but its accuracy does climb during training (e.g., see App. 
A).", "We conclude that, as expected, the NLM acquires a bias towards predicting frequent words early in training, but that this bias is weighed in against other (contextual) considerations later on in training.", "Comparing different scopes of context, our results (Fig. 6 and App. E) show that throughout training, the network presents high correlation with n -gram models.", "From a certain point onward, the network becomes more similar to the bi-gram model than to the other n -gram LMs.", "We also note that similarity peaks early on, but with time the correlation decreases.", "This may suggest that initially, the NLMs acquire grammatical behavior that resembles a Markov model, or even a bi-gram model.", "Only later does the network rely more on global features.", "This is in line with our earlier findings, which show an increasing correlation with syntactic depth as compared to sentence length.", "At the very beginning, NLMs often generate one word repetitions (e.g., \"the\" Fu et al., 2020).", "This seems to be at odds with our finding that grammar learning already begins at this early stage.", "However, while frequency may dictate the most probable predictions, comparing two options that differ only slightly may prove to depend more on context, as our results indicate.", "comparing NLMs to n -grams, we examined the effect of context within a fixed window size.", "Now we examine the effect of word order, within a window and in general.", "To this end, we create two ablated GPT2 tiny models.", "BOW is agnostic to the order between preceding tokens, while Window-5 is similar but relies only on 5 tokens (details in 2).", "Our results suggest that initially, the identity of the preceding words is more important than their order.", "Both BOW and Window-5 better correlate with our NLM than the n -gram models.", "Later on, this trend reverses and the n -grams, that do exploit word order, become better correlated.", "Furthermore, the correlation with Window-5 is sig-nificantly smaller than with BOW at later stages of learning, suggesting that the network gradually learns to rely on more context (cf. Saphra and Lopez, 2019).", "To understand the latent features learned by NLMs, we categorize linguistic phenomena through the lens of their learning trajectories.", "We ask whether linguistically similar phenomena are learned in a similar fashion, and whether what is learned similarly is defined by linguistic terms.", "We inspect linguistic categories by comparing the learning trajectories of their phenomena.", "In the Morphology field, we find that they display similar gradual curves, ultimately reaching high performance (median accuracy 0 . 85 , see Fig. 
7a).", "This may indicate that some latent features learned are morphological, and affect performance on almost all 'Morphology' phenomena.", "Syntax-semantics phenomena also present unique behavior: their scores plateau near chance performance (see Fig 7b), suggesting that the learned features are insufficient to correctly represent phenomena in this field.", "The other fields, \"se-mantics\" and \"syntax\" (Figs 7c,7d), do not present prototypical learning curves, suggesting that they are too broad to correspond to a single learning pattern.", "This, in turn, may suggest that they do not all correspond to a well-defined set of latent features.", "Next, we follow the reverse direction and cluster the learning curves of GPT2 tiny .", "We use spectral clustering with 10 clusters and sklearn default parameters, by projecting the learning curves into a normalized Laplacian and applying k-means.", "Intuitively, learning curves with similar values along the principal directions, are clustered together.", "Other clustering methods show similar results.", "learning profiles, some more expected than others.", "For some, accuracy improves as learning progresses (see Fig. 8a).", "Some are barely learned, and accuracy remains at near-chance level (see Fig. 8b).", "Perhaps more surprisingly, some clusters deteriorate, and accuracy drops to nearly 0 as learning progresses (see Fig. 8c).", "Notably, some challenges are quite easy NLMs instantly reach perfect accuracy (see Fig. 8d), while some are confusing NLMs performance is worse than chance (see Fig. 8c).", "In the latter cases, the NLMs presumably learn unrelated, harmful generalizations.", "When inspecting the emerging clusters, many (but not all, see Fig. 8b) contain a shared prominent field, but often varied super-phenomena (see Fig. 
8a).", "Thus, while the categorization in BLIMP reflects a common linguistic organization of grammatical phenomena, from the perspective of learning trajectories only few of the super-phenomena in BLIMP show consistent behavior.", "We cautiously conclude that there is some discrepancy between the common linguistic categorization of grammatical phenomena and the categorization induced by the learning trajectories of NLMs.", "An interesting direction for future work would therefore be the development of a theory that can account for the patterns presented by NLMs' learning trajectories.", "by a simple rule, easily learnable by an n -gram model.", "For example, in \"principle A case 1\", always preferring subjective pronouns (e.g., \"she\" or \"he\") over reflexive ones (e.g., \"himself\", \"herself\") is sufficient to obtain a perfect score, and preferring \"not ever\" over \"probably/fortunately ever\" solves \"sentential negation NPI licensor present\".", "The fact that NLM performance deteriorates, fits our finding that nascent NLMs resemble an n -gram model.", "Characterizing what networks learn is a longstanding challenge.", "Recently, studies suggested methods to analyze trained models such as probing (Tenney et al., 2019; Slobodkin et al., 2021), analyzing attention heads (Voita et al., 2019; Abnar and Zuidema, 2020) and neurons (finding also correlations across epochs; Bau et al., 2018) and assessing the extent to which LMs represent syntax (van Schijndel et al., 2019).", "Other works compare outputs, like us, to assess network generalizations (Choshen and Abend, 2019; Ontan'on et al., 2021), look for systematic biases (Choshen and Abend, 2018; Stanovsky et al., 2019) or evaluate characteristics of outputs (Gehrmann et al., 2021; Choshen et al., 2020).", "McCoy et al. (2020) fine-tuned BERT and tested generalizations on the adversarial dataset HANS (McCoy et al., 2019), finding models to make inconsistent generalizations.", "Their results differ from ours, but so is their setup, which in-8288 volves fine-tuning for inference.", "Characterizing the features learned by networks according to the order in which examples and phenomena are learned is a relatively new topic.", "Recently, Hacohen et al. (2020); Hacohen and Weinshall (2021); Pliushch et al. (2021) showed that classifiers learn to label examples in the same order.", "While their focus was on computer vision, it provided motivation for this work.", "Other studies use learning dynamics as a tool, rather than a topic of study.", "They choose training examples (Toneva et al., 2018), categorize examples (Swayamdipta et al., 2020) or characterize the loss-space (Xing et al., 2018).", "Little research on NLM learning dynamics and generalization types was previously conducted.", "Perhaps the closest to this work is Saphra and Lopez (2019), which compared LSTM representations with 3 types of linguistic tagger outputs, finding that correlation is low and that later in training, more context is used.", "The latter is reminiscent of our findings in 4.", "In parallel work, Liu et al. 
(2021) probe models during training.", "They show that, early in training, information required for linguistic classifi-cations is found somewhere in the layers of the model.", "Our work supports their findings by showing that grammar learning experiments conducted with one model are likely to replicate on another.", "Our methodology differs from theirs in requiring the information the model learnt to manifest itself in behavior rather than to be extractable with a dedicated classifier.", "Studying the trajectories of language learning is a mostly untapped area in NLP, but is a long-established field of research in linguistics and psychology.", "Such lines of research study topics such as acquisition of phonemes (Kuhl et al., 1992), morphology (Marcus et al., 1992), complex constructions (Gropen et al., 1991; Qing-mei, 2007) and innate learning abilities (Tomasello, 2003).", "Considerable computational work was also done on constructing models that present similar learning trajectories to those of infants (McClelland and Rumelhart, 1981; Perfors et al., 2010; Abend et al., 2017, among many others).", "Our work suggests that the generalizations NLMs make are coupled with the bottom-line performance.", "This gives a new angle and opens avenues of research when combined with previous work about bottom-line performance.", "For example, the bottom-line performance of small models could predict the performance of larger models (Ivgi et al., 2022).", "In such cases, the type of generalizations made might also be predicted from the smaller models.", "Our work is also closely related to fields such as curriculum learning (Bengio et al., 2009; Hacohen and Weinshall, 2019), self-paced learning (Kumar et al., 2010; Tullis and Benjamin, 2011), hard data mining (Fu and Menzies, 2017), and active learning (Krogh and Vedelsby, 1994; Hacohen et al., 2022; Ein-Dor et al., 2020).", "In these fields, the order in which data should be presented to the learner is investigated.", "On the other hand, in our work, we study the order of the data in which the learner is learning which may shed some light on the advancement of such fields.", "We showed that NLMs learn English grammatical phenomena in a consistent order, and subsequently investigated the emerging trajectory.", "Our findings suggest that NLMs present consistent and informative trends.", "This finding suggests a path for studying NLMs' acquired behavior through their learning dynamics, as a useful complementary perspective to the study of final representations.", "Future work will consider the impact of additional factors, architectures and learning phases that appear only later in training.", "We hope that this work will increase the affinity between the knowledge and methodologies employed in developmental studies, and those used for studying NLMs.", "Our goal is to obtain a better understanding of what makes linguistic generalization complex or simple to learn, for both humans and NLMs.", "We thank Prof. Inbal Arnon for her helpful discussions.", "This work was supported in part by the Israel Science Foundation (grant no. 2424/21), by a grant from the Israeli Ministry of Science and Technology, and by the Gatsby Charitable Foundations." ]
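The per-phenomenon accuracies analyzed in the record above follow the standard BLiMP protocol: for each minimal pair, the model is counted correct when it assigns the acceptable sentence a higher probability. Below is a minimal sketch of that scoring; the `gpt2` checkpoint and the NPI example pair are illustrative assumptions, not the authors' training checkpoints or data.

```python
# Minimal sketch of BLiMP-style minimal-pair scoring, assuming the standard
# protocol: a pair counts as correct when the LM gives the acceptable sentence
# a higher total log-probability. The "gpt2" checkpoint and the example pair
# are stand-ins, not the authors' checkpoints or data.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood over the predicted tokens;
    # rescale to a summed log-probability before comparing the two sentences.
    return -out.loss.item() * (ids.size(1) - 1)

# NPI-licensing example in the spirit of "not ever" vs. "probably ever"
good = "The managers have not ever complained."
bad = "The managers have probably ever complained."
print(sentence_logprob(good) > sentence_logprob(bad))
```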
[ "abstain", "abstain", "result", "abstain", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "objective", "result", "abstain", "abstain", "result", "method", "other", "abstain", "result", "abstain", "abstain", "result", "objective", "result", "abstain", "method", "objective", "abstain", "abstain", "result", "objective", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "other", "other", "other", "other", "objective", "other", "other", "other", "abstain", "abstain", "other", "other", "abstain", "objective", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "objective", "method", "abstain", "abstain", "method", "objective", "other", "other" ]
[ "Tangled multi-party dialogue contexts lead to challenges for dialogue reading comprehension, where multiple dialogue threads flow simultaneously within a common dialogue record, increasing difficulties in understanding the dialogue history for both human and machine.", "Previous studies mainly focus on utterance encoding methods with carefully designed features but pay inadequate attention to characteristic features of the structure of dialogues.", "We specially take structure factors into account and design a novel model for dialogue disentangling.", "Based on the fact that dialogues are constructed on successive participation and interactions between speakers, we model structural information of dialogues in two aspects: 1)speaker property that indicates whom a message is from, and 2) reference dependency that shows whom a message may refer to.", "The proposed method achieves new state-of-the-art on the Ubuntu IRC benchmark dataset and contributes to dialogue-related comprehension.", "Communication between multiple parties happens anytime and anywhere, especially as the booming social network services hugely facilitate open conversations, such as group chatting and forum discussion, producing various tangled dialogue logs (Lowe et al., 2015; Zhang et al., 2018b; Choi et al., 2018; Reddy et al., 2019; Li et al., 2020a).", "Whereas, it can be challenging for a new participant to understand the previous chatting log since multi-party dialogues always exhibit disorder and complication (Shen et al., 2006; Elsner and Charniak, 2010; Jiang et al., 2018; Kummerfeld et al., 2019).", "In fact, it is because of the distributed and random organization, multi-party dialogues are Corresponding author.", "I cant ssh into it until I log in as a user.", "daftykins: oh haha sorry it says connetction refused.", "Do the ubuntu lts versions able to run as a live cd?", "regum: Because the root accoutn is disabled in Ubuntu, by default.", "blackdog: run as live for testing, yes which is really inconvenient, as it means I have to plug a keyboard every time I turn it on.", "thanks regum: so reconfigure SSHd I meant to the channel and not me, I know nothing about python :) daftykins regum regum blakdog bekks daftykins regum blakdog daftykins jancoow (1003) (1009) (1010) (1011) (1012) (1013) (1014) (1015) (1016) Figure 1: Here is an example piece of multi-party chatting logs from Ubuntu IRC (Kummerfeld et al., 2019).", "much less coherent or consistent than plain texts.", "As the example shown in figure 1, the development of a multi-party dialogue has the following characteristics: 1) Random users successively participate in the dialogue and follow specific topics that they are interested in, motivating the development of those topics.", "2) Users reply to former related utterances and mention involved users, forming dependencies among utterances.", "As a result, multiple ongoing conversation threads grow as the dialogue proceeds, which breaks the consistency and hinders both humans and machines from understanding the context, let alone giving a proper response (Jiang et al., 2018; Kummerfeld et al., 2019; Joty et al., 2019; Jiang et al., 2021).", "In a word, the behavior of speakers determines the structure of a dialogue passage.", "And the structure causes problems of reading comprehension.", "Hence, for better understanding, structural features of dialogue context deserve special attention.", "messages or clustering conversation threads help with screening concerned parts among contexts, therefore it may be 
naturally required by passage comprehension, and related downstream dialogue tasks (Elsner and Charniak, 2010; Jia et al., 2020; Liu et al., 2021a), such as response selection, question-answering, etc.", "Nevertheless, existing works on dialogue disentanglement (Zhu et al., 2020; Yu and Joty, 2020; Li et al., 2020b) generally ignore or pay little attention to characters of dialogues.", "Earlier works mainly depend on feature engineering (Kummerfeld et al., 2019; Elsner and Charniak, 2010; Yu and Joty, 2020), and use well-constructed handcrafted features to train a naive classifier (Elsner and Charniak, 2010) or linear feed-forward network (Kummerfeld et al., 2019).", "Recent works are mostly based on two strategies: 1) two-step (Mehri and Carenini, 2017; Zhu et al., 2020; Yu and Joty, 2020; Li et al., 2020b; Liu et al., 2021a) and 2) end-to-end (Tan et al., 2019; Liu et al., 2020a).", "In terms of the two-step method, the disentanglement task is divided into matching and clustering .", "It means firstly matching utterance pairs to detect reply-to relations and then clustering utterances according to the matching score.", "In the end-to-end strategy, alternatively, for each conversation thread, the state of dialogue is modeled, and is mapped with a subsequent utterance to update.", "At the same time, the subsequent utterance is judged to belong to the best-matched thread.", "Nonetheless, the essence of both strategies is to model the relations of utterance pairs.", "Recently, Pre-trained Language Models (PrLMs) (Devlin et al., 2019; an, 2019; Clark et al., 2020) have brought prosperity to numbers of natural language processing tasks by providing contextualized backbones.", "Various works have reported substantial performance gains with the contextualized information from PrLMs (Lowe et al., 2015; Li et al., 2020a; Liu et al., 2021c; Jia et al., 2020; Wang et al., 2020).", "Studies on dialogue disentanglement also get benefit from PrLMs (Li et al., 2020b; Zhu et al., 2020), whereas, there is still room for improvement due to their insufficient enhancement of dialogue structure information.", "So as to enhance characteristic structural features of tangled multi-party dialogues, we design a new model as a better solution for dialogue disentanglement.", "Structure of a multi-party dialogue is based on the actions of speakers according to the natural development of dialogues.", "Hence, we model two structural features to help with the detection of reply-to relationships: 1)user identities of messages, referred to as speaker property ; and 2) mention of users in messages, called reference dependency .", "With the two features enhanced between encoding and prediction, the model makes progress on dialogue disentanglement.", "Evaluation is conducted on DSTC-8 Ubuntu IRC dataset (Kummerfeld et al., 2019), where our proposed model achieves new state-of-the-art.", "Further analyses and applications illustrate the advantages and scalability additionally.", "Our source code is available 1 .", "Dialogue understanding brings challenges to machine reading comprehension (MRC), in terms of handling the complicated scenarios from multiple speakers and criss-crossed dependencies among utterances (Lowe et al., 2015; Yang and Choi, 2019; Sun et al., 2019; Li et al., 2020a).", "A dialogue is developed by all involved speakers in a distributed way.", "An individual speaker focuses on some topics that are discussed in the conversation, and then declares oneself or replies to utterances from related speakers.", 
"Therefore, consistency and continuity are broken by tangled reply-to dependencies between non-adjacent utterances (Li et al., 2020a; Jia et al., 2020; Ma et al., 2021; Li et al., 2021), leading to a graph structure that is different from smooth presentation in plain texts.", "PrLMs have made a significant breakthrough in MRC, where various training objectives and strategies (Devlin et al., 2019; Clark et al., 2020; an, 2019; Lan et al., 2020) have achieved further improvement.", "Devoted to MRC tasks, PrLMs usually work as a contextualized encoder with some task-oriented decoders added (Devlin et al., 2019).", "And this paradigm may be a generic but suboptimal solution, especially for some distinctive scenarios, such as dialogue.", "Recently, numbers of works of dialogue-related MRC have managed to enhance dialogue structural features in order to deal with dialogue passages better (Liu et al., 2021c; Jia et al., 2020; Zhang and Zhao, 2021; Ma et al., 2021; Li et al., 2021), 1 https://github.com/xbmxb/ StructureCharacterization4DD 286 which achieve progress compared to methods that were previously proposed for plain texts.", "This inspiration impacts and promotes a wide range of dialogue-related MRC tasks such as response selection (Gu et al., 2020; Liu et al., 2021c), question answering (Ma et al., 2021; Li et al., 2021), emotion detection (Hu et al., 2021), etc. 2.2 Dialogue Disentanglement Dialogue disentanglement (Elsner and Charniak, 2010), which is also referred to as conversation management (Traum et al., 2004) , thread detection (Shen et al., 2006) or thread extraction (Adams, 2008), has been studied for decades, since understanding long multi-party dialogues remains to be non-trivial.", "Thus, dialogue disentanglement methods have been proposed to cluster utterances.", "Early works can be summarized as feature encoder and clustering algorithms.", "Well-designed handcraft features are constructed as input of simple networks that predict whether a pair of utterances are alike or different, and clustering methods are then borrowed for partitioning (Elsner and Charniak, 2010; Jiang et al., 2018).", "Researches are facilitated by a large-scale, high-quality public dataset, Ubuntu IRC, created by Kummerfeld et al. (2019).", "And then the application of FeedForward network and pointer network (Vinyals et al., 2015) leads to significant progress, but the improvement still partially relies on handcraft-related features (Kummerfeld et al., 2019; Yu and Joty, 2020).", "Then the end-to-end strategy is proposed and fills the gap between the match and clustering (Liu et al., 2020a), where dialogue disentanglement is modeled as a dialogue state transition process.", "The utterances are clustered by mapping with the states of each dialogue thread.", "Inspired by achievements of pre-trained language models (Devlin et al., 2019; Clark et al., 2020; an, 2019), latest work use BERT to contextually encode the dialogue context (Zhu et al., 2020; Li et al., 2020b).", "Liu et al. 
(2021b) investigates disentanglement from a different perspective.", "Their end-to-end co-training approach provides a novel unsupervised baseline.", "However, attention paid to the characteristics of dialogues seems to be inadequate.", "Feature engineering-based works represent properties of individual utterances such as time, speakers, and topics with naive handcraft methods, thus ignoring dialogue contexts (Elsner and Charniak, 2010; Kummerfeld et al., 2019).", "PrLM-based Masked Hierarchical Transformer (Zhu et al., 2020) utilizes the golden conversation structures to operate attentions on related utterances when training models, which results in exposure bias.", "DialBERT (Li et al., 2020b), a recent architecture including a BERT (Devlin et al., 2019) and an LSTM (Hochreiter and Schmidhuber, 1997), models contextual clues but no dialogue-specific features, and claims a state-of-the-art performance.", "Our approach draws inspiration from these works and further models structural features for better dialogue understanding.", "Unlike the above studies, our work incorporates dialogue-specific characters.", "We propose a new model considering structural characteristics of dialogues, based on the fact that dialogues are developed according to the behavior of speakers.", "In detail, we model dialogue structures with two highlights: 1) speaker properties of each utterance and 2) reference of speakers between utterances, which both help with modeling inherent interactions among a dialogue passage.", "Speaker role, as a feature of dialogue passage, has received growing attention recently.", "On the one hand, speaker embedding facilities research of dialogues.", "Speaker-aware modeling has also made contributions to response retrieval (Gu et al., 2020; Liu et al., 2021c).", "SA-BERT (Gu et al., 2020) add a speaker embedding to the input of a PrLM, while MDFN (Liu et al., 2021c) modifies self-attention to enhance speaker switches.", "Persona has been utilized for smoother dialogue generation.", "In recent work (Liu et al., 2020b), the speaker-aware information is modeled by adding a reward of persona proximity to the reinforcement learning of generation, based on a persona-annotated dataset (Zhang et al., 2018a).", "On the other hand, speakers role is a valuable research object for personal knowledge analysis, since the persona can be extracted from one's words in dialogues.", "Relationship prediction task has been better handled through observing interactions of dialogue speakers (Jia et al., 2021; Tigunova et al., 2021).", "Tigunova et al. 
(2021) make use of speaker identity by a SA-BERT (Gu et al., 2020)-like embedding but in utterance-level representation.", "Relations between utterances have been studied for a long time.", "Earlier works mostly based on pioneer datasets, Penn Discourse TreeBank 287 (Prasad et al., 2008) and Rhetorical Structure Theory Discourse TreeBank (Mann and Thompson, 1988).", "In the dialogue field, the much more complex relations contain latent features (Shi and Huang, 2019; Zhang and Zhao, 2021; Jia et al., 2020).", "Due to the inherent graph structure, Graph Convolutional Network (Kipf and Welling, 2017) is well applied to natural language modeling.", "Derivations such as Relational-GCN (Schlichtkrull et al., 2018), TextGCN (Yao et al., 2019), LBGCN (Huang et al., 2021), etc, encourage better structural solutions in NLP.", "In this work, we aim to inject speaker-aware and reference-aware characteristic features for the motivation of disentanglement, instead of making progress on embedding approaches.", "The definition of the dialogue disentanglement task and details of our model are sequentially presented in this section, illustrating how we make efforts for disentanglement with dialogue structural features.", "Suppose that we perform disentanglement to a long multi-party dialogue history D = { u 0 , u 2 , . . . , u n } , where D is composed of n utterances.", "An utterance includes an identity of speaker and a message sent by this user, thus denoted as u i = { s i , m i } .", "As several threads are flowing simultaneously within D , we define a set of threads T = { t 0 , t 2 , . . . , t p } as a partition of D , where t i = { u i 0 , . . . , u i k } denoting a thread of the conversation.", "In this task, we aim to disentangle D into T .", "As indicated before, a multi-party dialogue is constructed by successive participation of speakers, who often reply to former utterances of interest.", "Thus, a dialogue passage can be modeled as a graph structure whose vertices denote utterances and edges denote reply-to relationships between utterances.", "Following the two-step method (Mehri and Carenini, 2017), we focus on finding a parent node for each utterance through inference of reply-to relationship, so as to discover edges and then determine the graph of a conversation thread.", "Figure 2 shows the architecture of the proposed model, which is introduced in detail in this part.", "The model architecture consists of three modules, including text encoder, structural interaction, and context-aware prediction: 1) The utterances from a dialogue history are encoded with a PrLM, whose output is then aggregated to context-level.", "2) The representation is sequentially fed into the structural modeling module, where dialogue structural features are used to characterize contexts.", "3) Then in the prediction module, the model performs a fusion and calculates the prediction of reply-to relationships.", "Pairwise encoding Following previous works (Zhu et al., 2020; Li et al., 2020b), we utilize a pre-trained language model e .", "g .", "BERT (Devlin et al., 2019) as an encoder for contextualized representation of tokens.", "Since chatting records are always long and continuous, it is inappropriate and unrealistic to concatenate the whole context as input.", "Hence, we focus on the pair of utterances with a reply-to relation.", "An utterance is concatenated with each parent candidate as input to a PrLM.", "This may sacrifice contextual information between candidates, but we make up for this in 3.2.3.", "Assuming 
that for an utterance u i , we consider former C utterances (including u i itself) as candidates for parent node of u i , the input of a PrLM is in the form of [CLS] u i j [SEP] U i [SEP] , where 0 j C 1 .", "The output is denoted as H 0 RC L D , where C denotes the window length in which former utterances are considered as candidates of the parent, L denotes the input sequence length in tokens, D denotes the dimension of hidden states of the PrLM.", "Note that there is a situation where the golden parent utterance is beyond the range of [ u i ( C 1) , u i ] .", "We label a self-loop for u i in this case, which means being too far from the parent making u i a beginning of a new dialogue thread.", "It makes sense in the real world, because when users join in a chat ( e . g . entering a chatting room), they intend to check a limited number of recent messages and make replies, instead of scanning the entire chatting record.", "Utterance Aggregation H 0 is pairwise contextualized representations of each pair of token sequences ( u i j , u i ) , thus need to be aggregated to context-level representation for further modeling.", "Since special token [CLS] makes more sense on classification tasks (Devlin et al., 2019), we simply reserve the representations of [CLS] .", "The 288 Encoder (1009) (1010) (1011) (1012) (1013) (1014) (1015) (1016) 1009100910101010101110111012101210131013101410141015101510161016 10091009101010101011101110121012 10131013 101410141015101510161016 10091009 10101010 10111011 10141014 10151015 10161016 (1003) 10171017 10171017 10031003 10031003 10031003 10131013 10121012 10171017 Speaker Classifier (1017) Dependency Aggregation PrLM regum: which is really inconvenient, as it means I have to plug a keyboard every time I turn it on.", "concatenated pairwise context-level representations from all candidates is denoted as H 1 RC D , where C denotes the window length and D denotes the dimension of hidden states of the PrLM.", "For our structural modeling, a simple but effective method is preferred.", "Hence, for speaker property, we applied the idea of masked MHSA method (Liu et al., 2021c) for better effectiveness and conciseness (Ma et al., 2021).", "In dependency modeling, we only built one relation type, i.e., reference, where a vanilla r-GCN (Schlichtkrull et al., 2018) is an appropriate baseline method.", "Speaker Property Modeling We use the term Speaker Property to denote the user identity from whom an utterance is, in formulation, s i .", "Modeling speaker property could be worthwhile because sometimes a participant may focus on conversations with specific speakers.", "Following the idea of masking attention (Liu et al., 2021c), we build a Multi-Head Self-Attention (MHSA) mechanism to emphasize correlations between utterances from the same speaker.", "The mask-based MHSA is formulated as follows: A ( Q, K, V, M ) = softmax( QKT d k + M ) V , head t = A ( HW Qt , HW Kt , HW Vt , M ), MHSA ( H, M ) = [ head 1 , , . . . 
,head N ] WO , (1) where A , head t , Q , K , V , M , N denote the attention, head, query, key, value, mask, and the number of heads, respectively.", "H denotes the input matrix, and W Qt , W Kt , W Vt , WO are parameters.", "Operator [ , ] denotes concatenation.", "At this stage, the input of MHSA is the aggregated representation H 1 with a speaker-aware mask matrix M .", "The element at the i -th row, j -th column of M depend on speaker properties of u i and u j : M[i, j] = (cid:40) 0 , s i = s j , otherwise H 2 = MHSA ( H 1 , M ), (2) The output of MHSA, HMHSA , has the same dimension with H 1 RC D .", "We concatenate H 1 and HMHSA and adjust to the same size using a linear layer, resulting in an output of this module denoted as H 2 RC D .", "Reference Dependency Modeling As discussed above, the relation of references between speakers is the most important and straightforward dependency among utterances.", "Because references indicate interactions between users, it is the internal motivation of the development of a dialogue.", "To this end, we build a matrix to label the references, which is regarded as an adjacency matrix of a graph representation.", "In the graph of references, a vertice denotes an utterance and an edge for reference dependence.", "For example, u 1012 in Figure 1 mentions and reply to regum , forming dependence to utterances from regum , i.e., u 1009 , u 1010 , and u 1014 .", "Thus there are edges from v 1012 to v 1009 , v 1010 , and v 1014 .", "Impressed by the significant influence of graph convolutional network (GCN) (Kipf and Welling, 2017), we borrow the relation-modeling of relational graph convolutional network (r-GCN) (Schlichtkrull et al., 2018; Shi and Huang, 2019) in order to enhance the reference dependencies, which can be denoted 289 Model VI ARI 1-1 F1 P R Test Set FeedForward (Kummerfeld et al., 2019) 91.3 75.6 36.2 34.6 38.0 10 union (Kummerfeld et al., 2019) 86.2 62.5 33.4 40.4 28.5 10 vote (Kummerfeld et al., 2019) 91.5 76.0 38.0 36.3 39.7 10 intersect (Kummerfeld et al., 2019) 69.3 26.6 32.1 67.0 21.1 Elsner (Elsner and Charniak, 2008) 82.1 51.4 15.5 12.1 21.5 Lowe (Lowe et al., 2017) 80.6 53.7 8.9 10.8 7.6 BERT (Li et al., 2020b) 90.8 62.9 75.0 32.5 29.3 36.6 DialBERT (Li et al., 2020b) 92.6 69.6 78.5 44.1 42.3 46.2 +cov (Li et al., 2020b) 93.2 72.8 79.7 44.8 42.1 47.9 +feature (Li et al., 2020b) 92.4 66.6 77.6 42.2 38.8 46.3 +future context (Li et al., 2020b) 92.3 66.3 79.1 42.6 40.0 45.6 Ptr-Net (Yu and Joty, 2020) 92.3 70.2 36.0 33.0 38.9 + Joint train (Yu and Joty, 2020) 93.1 71.3 39.7 37.2 42.5 + Self-link (Yu and Joty, 2020) 93.0 74.3 41.5 42.2 44.9 + Joint train&Self-link (Yu and Joty, 2020) 94.2 80.1 -44.5 44.9 44.2 BERT base (Our baseline) 91.4 60.8 74.4 37.2 34.0 41.2 Our model 94.6 +3.2 76.8 +16 84.2 +9.8 51.7 +14.5 51.8 +17.8 51.7 +10.5 Dev Set Decom.", "ri where B is the set of relationships, which in our module is only reference dependencies.", "N ri denotes the set of neighbours of vertice v i , which are connected to v i through relationship r , and c i,r is constant for normalization.", "W ( l ) r and W ( l ) 0 are parameter matrix of layer l .", "is activated function, which in our implementation is ReLU (Glorot et al., 2011; Agarap, 2018).", "H 2 is fed into this module and derives H 3 RC D through r-GCN.", "The structure-aware representation H 3 needs to be combined with the original representation of [CLS] H 0 for enhancement.", "An LSTM-like layer (Hochreiter and Schmidhuber, 1997; Li et al., 2020b) can be utilized for 
compensating contextualized information of the whole candidate window.", "Motivated by the two points above, we employ a Syn-LSTM module (Xu et al., 2021), which was originally proposed for named entity recognition (NER).", "A Syn-LSTM is distinguished from an additional input gate for an extra input source, whose parameters are trainable, achieving a better fusion of two input sources.", "Thus, a layer of Syn-LSTM models the contextual information while the reference dependency is highlighted, enriching relations among parent candidates.", "In a Syn-LSTM cell, the cell state is derived from the two input and former state as well: c 1 t = tanh(W ( k ) x 1 t + U ( k ) h t 1 + b k ), c 2 t = tanh(W ( p ) x 2 t + U ( p ) h t 1 + b p ), c t = f t c t 1 + i 1 t c 1 t + i 2 t c 2 t , h t = o t tanh(c t ), where f t , o t , i 1 t , i 2 t are forget gate, output gate and two input gates.", "c t 1 , c t denote former and current cell states.", "h t 1 is former hidden state.", "And W, U, b are learnable parameters.", "We use the Syn-LSTM in a bi-directional way, and the output is denoted as H 4 RC 2 D r , where D r is the hidden size of the Syn-LSTM.", "At this stage, H 4 is the structural feature-enchanced representation of each pair of the utterance U i and a candidate parent utterance u i j .", "To measure the correlations of these pairs, we follow previous work (Li et al., 2020b) to consider 290 the Siamese architecture between each [ u i , u i j ] pair ( 1 j C 1 ) and [ u i , u i ] pair: H 5 [ j ] = [p ii , p ij , p ii p ij , p ii p ij ] , where p ij is the representation for the pair of [ U i , U i j ] from H 4 , and we got H 4 RC 8 D r .", "H 5 is then fed into a classifier to predict the most correlated pair and predict the parent.", "Cross-entropy loss is used as the model training objective.", "Our proposed model is evaluated on a large-scale multi-party dialogue log dataset Ubuntu IRC (Kummerfeld et al., 2019), which is also used as a dataset of DSTC-8 Track2 Task4.", "The results show that our model surpasses the baseline significantly and achieves a new state-of-the-art.", "Ubuntu IRC (Internet Relay Chat) (Kummerfeld et al., 2019) is the first available dataset and also the largest and most influential benchmark corpus for dialogue disentanglement, which promotes related research heavily.", "It is collected from #Ubuntu and #Linux IRC channels in the form of chatting logs.", "The usernames of dialogue participants are reserved, and reply-to relations are manually annotated in the form of (parent utterance, son utterance) .", "Table 2 shows statics of Ubuntu IRC.", "Reply-to relations We calculate the accuracy for the prediction of parent utterance, indicating the inference ability for reply-to relations.", "Disentanglement For the goal of dialogue disentanglement, threads of a conversation are formed by clustering all related utterances bridged by reply-to relations, in other words, a connected subgraph.", "At this stage, we use metrics to evaluate following DSTC-8, which are scaled-Variation of Information (VI) (Kummerfeld et al., 2019), Adjusted rand index (ARI) (Hubert and Arabie, Model VI ARI 1-1 F1 P R BERT base 91.7 74.6 80.2 33.5 32.16 35.0 Ablation study + speaker 94.0 81.2 84.9 45.0 44.7 45.3 + reference 94.1 82.4 85.6 47.4 47.4 47.4 + Both 94.4 81.8 86.1 52.6 51.0 54.3 Aggregation methods w/ max-pooling 94.1 80.0 85.3 50.8 52.5 49.2 w/ [CLS] 94.4 81.8 86.1 52.6 51.0 54.3 Layers of Syn-LSTM w/ 1 layer 94.4 81.8 86.1 52.6 51.0 54.3 w/ 2 layers 94.0 78.2 84.6 50.4 50.9 50.0 w/ 3 
layers 94.3 79.6 85.3 52.2 51.9 52.6 Table 3: Results of architecture optimizing experiments. 1985), One-to-One Overlap (1-1) (Elsner and Charniak, 2010), precision (P), recall (R), and F1 score of clustering.", "Note that in the table of results, we present 1-VI instead of VI (Kummerfeld et al., 2019), thus for all metrics, we expect larger numerical values that mean stronger performance.", "Our implementations are based on Pytorch and Transformers Library (Wolf et al., 2020).", "We fine-tune the model employing AdamW (Loshchilov and Hutter, 2019) as the optimizer.", "The learning rate begins with 4e-6.", "In addition, due to the tradeoff for computing resources, the input sequence length is set to 128, which our inputs are truncated or padded to, and the window width of considered candidates is set to 50.", "As is presented in Table 1, the experimental results show that our model outperforms all baselines by a large margin as the annotated difference values.", "It is also shown that our model achieves superior performance on most metrics compared to previously proposed models as highlighted in the table, making a new state-of-the-art (SOTA).", "We study the effect of speaker property and reference dependency respectively to verify their specific contribution.", "We ablate either of the characters and train the model.", "Results in Table 3 show that both speaker property and reference dependency are non-trivial.", "At the stage of aggregation heading for context-level representations, we consider the influence of different methods of aggregation, i.e., max-pooling and extraction of [CLS] tokens, the models are trained with the same hyper-parameters.", "Results in Table 3 show [CLS] tokens is a better representation.", "To determine the optimal depth of the Bi-Syn-LSTM, we do experiments on the number of layers of a Syn-LSTM, also with the same hyper-parameters.", "According to the results, as shown in Table 3, we put a one-layer Bi-Syn-LSTM for better performance.", "To intuitively show and discuss the advantages of the proposed approach, we analyze predictions made by our model and the baseline model (i.e., BERT) in the following aspects.", "1) We categorize reply-to relationships based on the length of their golden spans (in utterances), and compute the precision of the baseline model and ours.", "Figure 3a shows that our model outperforms baseline by larger margins on links with longer spans (longer than 20 utterances), indicating that our model is more robust on the longer passages.", "2) We select bad cases of the baseline model to find out how the structure-aware modeling benefits dialogue disentanglement.", "We study predictions from our model on these bad cases.", "As depicted in Figure 3b, the model well solves 43.3% bad cases.", "Our model is observed to correct 20.8% bad cases whose utterance pairs are from the same speakers, and 18.3% bad cases whose utterance pairs have a reference.", "As the illustration shows, our model effectively captures the structural features caused by speaker property and reference dependency, thus gaining improvement.", "56.7% predictions are still wrong.", "It may suggest deeper inner relationships remain to be studied.", "The used metrics are explained and analyzed briefly for a better understanding of model performance in Appendix A.1.", "Empirically, it is consistent with our intuition that clarifying the structure of a passage helps with reading comprehension.", "This section studies the potential of dialogue disentanglement by conducting 
experiments on different tasks and domains.", "The dataset of DSTC7 subtask1 (Gunasekara et al., 2019) is a benchmark of response selection tasks, derived from Ubuntu chatting logs, which is challenging because of its massive scale.", "As shown in Table 4, it contains hundreds of thousand dialogue passages, and each dialogue has speaker-annotated messages and 100 response candidates.", "In the implementation, pre-processed context passages are firstly fed into the trained model for disentanglement to obtain predicted partitions of context utterances.", "Then when dealing with the response selection task, we add a self-attention layer to draw attention between utterances within a common cluster in the hope of labels of clusters leading to better contributions to performance.", "We also make efforts to apply disentanglement on span extraction tasks of question answering datasets, where we consider multi-party dialogue dataset Molweni (Li et al., 2020a), a set of speaker-annotated dialogues with some questions whose answers can be extracted from contexts, which is also collected from Ubuntu chatting logs", "4. Because passages in Molweni are brief compared to other datasets we used, utterances tend to belong to the same conversation session through crisscrossed relations.", "Thus we alternatively leverage labels of reply-to relations from our model, and build graphs among utterances.", "As the former two datasets are both extracted Ubuntu IRC chatting logs, we additionally consider an open-domain dataset, FriendsQA (Yang and Choi, 2019).", "It contains daily spoken languages from the TV show Friends", "4. FriendsQA gives QA questions and is handled in the same way as the Molweni dataset.", "Results of the above experiments are presented in Table", "5. It is shown that the disentanglement model brings consistent profits to downstream tasks.", "Yet, gains on FriendsQA are less impressive, indicating domain limitations to some extent.", "Here we only consider naive baselines and straightforward methods for simplicity and fair comparison, which suggests there is still latent room for performance improvement in future work.", "In this paper, we study disentanglement on long multi-party dialogue records and propose a new model by paying close attention to the characteristics of dialogue structure, i.e., the speaker property and reference dependency.", "Our model is evaluated on the largest and latest benchmark dataset Ubuntu IRC, where experimental results show a new SOTA performance and advancement compared to previous work.", "In addition, we analyze the contribution of each structure-related feature by ablation study and the effect of the different model architecture.", "Our work discloses that speaker and dependency-aware structural characters are significant and deserve studies in multi-turn dialogue modeling." ]
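The speaker-aware masked self-attention described in the record above (Eq. 1-2) can be sketched compactly: attention between the aggregated pair representations is restricted to candidates from the same speaker, and the attended output is concatenated with the input and projected back. Tensor names, sizes, and the toy inputs below are assumptions for illustration, not the authors' released code.

```python
# Minimal PyTorch sketch of speaker-aware masked multi-head self-attention.
import torch
import torch.nn as nn

def speaker_mask(speakers: torch.Tensor) -> torch.Tensor:
    """speakers: [C] speaker ids -> additive attention mask of shape [C, C]."""
    same = speakers.unsqueeze(0) == speakers.unsqueeze(1)
    mask = torch.zeros(same.shape)
    mask[~same] = float("-inf")   # block attention across different speakers
    return mask                   # diagonal stays 0, so no row is fully masked

class SpeakerAwareMHSA(nn.Module):
    def __init__(self, d_model: int = 768, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.proj = nn.Linear(2 * d_model, d_model)

    def forward(self, h1: torch.Tensor, speakers: torch.Tensor) -> torch.Tensor:
        # h1: [1, C, D] aggregated [CLS] representations of the C candidates
        attended, _ = self.attn(h1, h1, h1, attn_mask=speaker_mask(speakers))
        # concatenate with the input and map back to D, as described above
        return self.proj(torch.cat([h1, attended], dim=-1))

# toy usage
C, D = 50, 768
h2 = SpeakerAwareMHSA(D)(torch.randn(1, C, D), torch.randint(0, 10, (C,)))
print(h2.shape)
```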
[ "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "method", "method", "abstain", "method", "result", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "result" ]
[ "Knowledge bases (KBs) and text often contain complementary knowledge: KBs store structured knowledge that can support long-range reasoning, while text stores more comprehensive and timely knowledge in an unstructured way.", "Separately embedding the individual knowledge sources into vector spaces has demonstrated tremendous successes in encoding the respective knowledge, but how to jointly embed and reason with both knowledge sources to fully leverage the complementary information is still largely an open problem.", "We conduct a large-scale, systematic investigation of aligning KB and text embeddings for joint reasoning.", "We set up a novel evaluation framework with two evaluation tasks, few-shot link prediction and analogical reasoning , and evaluate an array of KB-text embedding alignment methods.", "We also demonstrate how such alignment can infuse textual information into KB embeddings for more accurate link prediction on emerging entities and events, using COVID-19 as a case study.", "1 1 Introduction Recent years have witnessed a rapid growth of knowledge bases (KBs) such as Freebase (Bol-lacker et al., 2007), DBPedia (Auer et al., 2007), YAGO (Suchanek et al., 2007) and Wikidata (Vrandecic and Krotzsch, 2014).", "These KBs store facts about real-world entities (e.g. people, places, and things) in the form of RDF triples, i.e. (sub-ject, predicate, object).", "Today's KBs are massive in scale.", "For instance, Freebase contains over 45 million entities and 3 billion facts involving a large variety of relations.", "Such large-scale multi-relational knowledge provides a great potential for improving a wide range of tasks, from information retrieval (Castells et al., 2007; Shen et al., 2015), 1 Code and data are available at https://github.", "question answering (Yao and Van Durme, 2014; Yu et al., 2017) to biological data mining (Zheng et al., 2020b).", "KB embedding models (Bordes et al., 2013; Dong et al., 2014; Lin et al., 2015) embed entities and relations into vector space(s) such that the embeddings capture the symbolic knowledge present in the KB.", "Similarly, word embedding models (Mikolov et al., 2013b; Pennington et al., 2014) learn continuous vector representations that capture the distributional semantics of words.", "Experiments on analogical reasoning (Mikolov et al., 2013b; Gladkova et al., 2016) and multilingual word embedding alignment (Mikolov et al., 2013a) have shown that there exists a linear structure in the word embedding space encoding relational information.", "On the other hand, translation-based KB embedding models (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015), by construction, also present a linear structure in their embedding space.", "A natural question then is, can we align the two embedding spaces such that they mutually enhance each other?", "Such alignment could potentially inject structured knowledge from KBs into text embeddings and inject unstructured but more timely-updated knowledge from text into KB embeddings, leading to more universal and comprehensive embeddings (Figure 1).", "Several studies have attempted at this.", "Lao et al. (2012) use the Path-Ranking Algorithm (Lao and Cohen, 2010) on combined text and KB to improve binary relation prediction.", "Gardner et al. (2014) leverage text data to enhance KB inference and help address the incompleteness of KBs.", "Toutanova et al. 
(2015) augment the KB with facts and relations from the text corpus and learn joint embedding for entities, KB relations and textual relations.", "Enhancement of KB entity embeddings using using Entity Descriptions has been attempted in (Zhong et al., 2015; Xie et al., 2016).", "Wang et al. (2014) propose to jointly embed entities and words in the same vector space.", "The alignment of embeddings of words and entities is accomplished using Wikipedia anchors or entity names.", "However, existing studies are still ad-hoc and a more systematic investigation of KB-text embedding alignment is needed to answer an array of important open questions: What is the best way to align the KB and text embedding spaces?", "To what degree can such alignment inject information from one source to another?", "How to balance the alignment loss with the original embedding losses?", "In this work, we conduct a systematic investigation of KB-text embedding alignment at scale and seek to answer these questions.", "Our investigation uses the latest version of the full Wikidata (Vrandecic and Krotzsch, 2014) as the KB, the full Wikipedia as the text corpus, and the shared entities as anchors for alignment.", "We define two tasks, few-shot link prediction and analogical reasoning, to evaluate the effectiveness of injecting text information into KB embeddings and injecting KB information into text embeddings, respectively, based on which we evaluate and compare an array of embedding alignment methods.", "The results and discussion present new insights about this important problem.", "Finally, using COVID-19 as a case study, we also demonstrate that such alignment can effectively inject text information into KB embeddings to complete KBs on emerging entities and events.", "In summary, our contributions are three-fold: 1. We conduct the first systematic investigation on KB-text embedding alignment at scale and propose and compare multiple effective alignment methods.", "2. We set up a novel evaluation framework with two evaluation tasks, few-shot link prediction and analogical reasoning, to facilitate future research on this important problem.", "3. We have also learned joint KB-text embeddings on the largest-scale data to date and will release the embeddings as a valuable resource to the community.", "KB-KB embedding alignment.", "Most existing knowledge bases are incomplete.", "Learning of distributed representations for entities and relations in knowledge bases finds application in the task of link prediction i.e. to infer missing facts in the KB given the known facts.", "This includes translation-based models (Bordes et al., 2013; Lin et al., 2015; Ji et al., 2015), feed-forward neural network based approaches (Socher et al., 2013; Dong et al., 2014), convolutional neural networks (Dettmers et al., 2018; Nguyen et al., 2018) and models that leverage graph neural networks (Schlichtkrull et al., 2018; Shang et al., 2019; Nathani et al., 2019).", "Recently, many research works have focused on the alignment of embedding spaces of heterogeneous data sources such as different KBs.", "JE (Hao et al., 2016) introduces a projection matrix to align the embedding spaces of different KBs.", "MTransE (Chen et al., 2017) first learns the embeddings of entities and relations in each language independently and then learns the transformation between these embedding spaces.", "Wang et al. 
(2018) use Graph Convolutional networks and a set of pre-aligned entities to learn embeddings of entities in multilingual KBs in a unified vector space.", "In the present work, we focus on aligning the KB and textual embedding spaces.", "KB-text joint representation.", "Many recent approaches have attempted to learn the embeddings of words and knowledge base entities in the same vector space.", "Wang et al. (2014) propose an alignment technique for KB and text representations using entity names and/or anchors.", "Wikipedia2Vec (Yamada et al., 2016) extends the skip-gram based model by modeling entity-entity co-occurrences using a link graph and word-entity co-occurrences using KB anchors.", "However, an entity mention can be ambiguous i.e. it can refer to different entities in different contexts.", "To resolve this, Cao et al. (2017) propose Multi-Prototype Entity Mention Embedding model to learn representations for different senses of entity mentions.", "It includes a mention sense embedding model which uses context words and a set of reference entities to predict the actual entity referred to by the mention.", "Despite this progress, a comprehensive investigation of the merits of different alignment approaches is missing.", "Our work takes a step forward in this direction and proposes a novel evaluation framework to compare multiple alignment approaches for KB-Text joint embedding on a large-scale KB and textual corpus.", "In this section, we describe the four alignment methods used in our study.", "At first, we describe the component models used in all alignment methods the KB embedding model and the skip-gram model.", "We use the TransE model (Bordes et al., 2013) to learn the KB embeddings.", "We use the loss function proposed in Sun et al. (2019) as our KB embedding objective.", "Here, d r ( h , t ) = (cid:107) h + r t (cid:107) 2 denotes the score function for the triple ( h, r, t ) , S denotes the set of positive triples and S (cid:48) denotes the set of corrupted triples obtained by replacing the head or tail of a positive triple with a random entity.", "is a hyper-parameter which denotes the margin and y denotes the label (+1 for positive triple and -1 for negative triple).", "The skip-gram model learns the embeddings of words and entities by modeling the word-word, word-entity and entity-entity co-occurrences.", "We use the skip-gram model proposed in Yamada et al. 
(2016) for learning the word and entity representations.", "Let W and E denote the set of all words and entities in the vocabulary respectively and c denote the size of the context window.", "sequence of N words w 1 , w 2 , , w N , the skip-gram model maximizes the following objective:", "where p ( w O | w I ) = exp ( v wI v wO ) (cid:80) exp ( v (cid:48) v w ) .", "Here, v (cid:48) w and v w denote the input and output representations of the word w respectively.", "The input representations are used as the final representations for both words and entities.", "Word-Entity co-occurrence model : In the word-entity co-occurrence model, the model is trained to predict the context words of an entity pointed to by the target anchor.", "The training objective corresponding to the word-entity co-occurrences is L we = (cid:88) ( e i ,C ei ) A (cid:88) w o C ei log p ( w o | e i ) Here, A denotes the set of anchors in the corpus.", "Each anchor consists of an entity e i and its context words (represented by C e i ).", "The conditional probability p ( w o | e i ) is given by: p ( w O | e i ) = exp ( v (cid:48) e i T v w O ) (cid:80) w W exp ( v (cid:48) e i T v w ) Entity-Entity co-occurrence model : The entity-entity co-occurrence model learns to predict incoming links of an entity (denoted by C e ) given an entity e .", "In practice, the probabilities involved in the skip-gram model are estimated using negative sampling (Mikolov et al., 2013b).", "The overall objective is the sum of the three objectives for each type of co-occurrence.", "We align the entity pairs in KB and text corpus using a set of seed entity pairs, which are obtained from a mapping between Wikidata and Wikipedia.", "This mapping is constructed from the metadata associated with the Wikidata entities.", "The set of entities present in the TransE model and the skip-gram model is denoted by ETE and ESG respectively.", "(a) Alignment using same embedding : In this approach, we use the same embedding for the shared entities in the KB and text corpus.", "There is no separate alignment loss for this method.", "(b) Alignment using Projection : Inspired by the multilingual word embedding approaches (Mikolov et al., 2013a; Faruqui and Dyer, 2014) which use a linear transformation to map word embeddings from one space to an-other, we use an affine transformation from the skip-gram vector space to the TransE vector space to align the entity representations.", "The alignment loss is calculated as a squared L2 norm between the transformed skip-gram entity embeddings and the corresponding TransE entity embeddings.", "The vectors e TE and e SG denote the TransE and skip-gram versions of embeddings of the entity e respectively.", "L align = (cid:88) e E SG E TE (cid:107) ( W e SG + b ) e TE (cid:107) 22", "(c) Alignment using Entity Names : In this alignment technique inspired by Wang et al. (2014), for a particular triple ( h, r, t ) in the KB, if an equivalent entity e h exists in the text corpus, we add an additional triple ( e h , r, t ) to the KB.", "Similarly, if an equivalent entity e t also exists for the entity t , we add the triples ( h, r, e t ) and ( e h , r, e t ) to the KB.", "The term name graph is used to denote this subgraph of additional triples.", "L align = (cid:88) ( h,r,t ) KB 1 [ h E SG t E SG ] d r ( w h , w t )+ 1 [ t E SG ] d r ( h , w t ) + 1 [ h E SG ] d r ( w h , t )", "(d) Alignment using Wikipedia Anchors This alignment technique is motivated by a similar technique proposed in Wang et al. 
(2014).", "Here, we introduce an alignment loss term in which for word-entity co-occurrences, we substitute the textual entity embedding by its KB counterpart in the skip-gram objective.", "Let e ite denote the embedding of the KB entity equivalent to the textual entity e i .", "L align = (cid:88) ( e i ,C ei ) A (cid:88) w o C ei log ( exp ( e iteT v w O ))+ k (cid:88) i =1 E w i P n ( W ) [ log ( exp ( e iteT v w i ))] Here, P n ( W ) denotes the noise distribution over words and k is the number of negative samples.", "Here, denotes the balance parameter which controls the extent of influence of alignment on the embeddings of each of the individual vector spaces.", "An illustration of the different alignment methods used in our study is given in Figure 2. 4 Dataset We use Wikipedia as the text corpus and Wikidata (Vrandecic and Krotzsch, 2014) as the knowledge base.", "We use the Wikidata version dated 16 December 2020 and the Wikipedia version dated 3 December 2020 for all of our experiments.", "The term support set (as used in the subsequent sections), denoted by S , is used to refer to the intersection set of Wikidata entities and entities in Wikipedia for which an article is present.", "Dataset preprocessing.", "We pre-process the original set of Wikidata triples and filter out entities and relations with frequency less than 10 and 5 respectively.", "This results in a KB with 14.64 M entities, 1222 relations, and 261 M facts.", "Similarly, we preprocess Wikipedia and filter out words from the vocabulary with frequency less than 10.", "However, we utilize the entire entity set of Wikipedia to maximize the size of the support set.", "After processing, the Wikipedia vocabulary consists of 2.1 M words and 12.3 M entities.", "prediction and analogical reasoning .", "The few-shot link prediction task is designed to test the capability of the alignment model to inject the relational information present in text into the knowledge base embeddings.", "The train-test set for this task is constructed such that the test set contains triples corresponding to a subset of entities in the support set, but each of these entities is observed only once in the training triples set.", "Thus, the model is tasked to do link prediction on entities that occur rarely in the training set (hence the term few-shot).", "The training and test sets consist of 260.1 M and 110.8 K triples respectively.", "For this setting, both entities of each triple in the test set are contained in the support set.", "The purpose of the analogical reasoning task is to test the information flow from the knowledge-base embeddings to the skip-gram embeddings.", "This task was first proposed in Mikolov et al. 
(2013b) to test the syntactic and semantic information present in learned word embeddings.", "We choose the top 50 relations from the set of one-to-one and many-to-one relations based on the frequency of occurrence and construct a dataset of 1000 analogical reasoning examples for each relation.", "The 1st pair of entities is randomly chosen from the training triples set, as the pair of entities involved in that relation.", "The 2nd pair of entities is obtained from the test triples set.", "More formally, given a pair of entities ( h 1 , t 1 ) and the head entity of the 2nd pair ( h 2 ) , the task is to predict the tail entity ( t 2 ) of the 2nd pair by comparing the cosine similarity between the embedding of candidate entity ( e t 2 ) and ( e h 2 + e t 1 e h 1 ) .", "Evaluation protocol.", "For link prediction evaluation on a given test triple ( h, r, t ) , we corrupt either the head entity (by generating triplets like ( h (cid:48) , r, t ) ) or the tail entity (by generating triplets like ( h, r, t (cid:48) ) ) of the triple and then rank the score of correct entity amongst all entities in the candidate set.", "Due to the extremely large entity vocabulary size in Wikidata, we restrict the size of the candidate set to a sample of 1000 entities whose types lie in the set of permissible domain/range types for that relation (Lerer et al., 2019; Krompa et al., 2015).", "In cases where the number of such entities is less than 1000, we choose the entire set of those entities.", "In addition, we filter any positive triplets (triplets that exist in the KB) from the set of negative triplets for this evaluation, also known as filtered evaluation setting.", "We report results on standard evaluation metrics Mean Rank (MR), Hits@1, and Hits@10.", "For this task, we compare the TransE model and the KB-side embeddings of different alignment methods.", "For the analogical reasoning task, we report Mean Rank (MR), Hits@1, and Hits@10 by ranking the correct entity t 2 against the entities in the candidate set.", "The candidate set for the tail entity t 2 is a set of 1K entities sampled from the support set (excluding h 1 , h 2 and t 1 ) according to the node degree.", "All reported metrics are macro-averaged over the results for different relations.", "Here, we compare the skip-gram model embeddings with the textual embeddings obtained from different alignment methods.", "The scale of the training data (both the Wikidata Knowledge Base and the Wikipedia corpus) is huge, so the efficient implementation of the model is a key challenge.", "For efficient implementation of the TransE model, we used the DGL-KE (Zheng et al., 2020a) library.", "It uses graph partitioning to train across multiple partitions of the knowledge base in parallel and incorporates engineering optimizations like efficient negative sampling to reduce the training time by orders of magnitude compared to naive implementations.", "The skip-gram model is implemented using PyTorch (Paszke et al., 2019) and Wikipedia2vec (Yamada et al., 2020) libraries.", "For training, we optimize the parameters of the TransE and skip-gram models alternately in each epoch.", "We use the Adagrad (Duchi et al., 2011) optimizer for the KBE model and SGD for the skip-gram model.", "For both models, the training is done by multiple processes asynchronously using the Hogwild (Niu et al., 2011) approach.", "This introduces additional challenges like synchronizing the weights of parameters among different training processes.", "We choose the values of balance parameter 
for each of the two evaluation tasks based on the performance of aligned KB and textual embeddings on a small set of analogy examples (disjoint from the analogy test set used in the main evalu-ation).", "Our implementation can serve as a good resource to do a similar large-scale analysis of KB-Text alignment approaches in the future.", "The overall results for the two evaluation tasks are given in Table 1. For the few-shot link prediction task, we observe that all the alignment techniques", "techniques lead to improved performance of the KB embeddings over the naive TransE baseline.", "The Same Embedding alignment approach performs the best followed by Entity Name alignment, Projection, and alignment using Wikipedia Anchors.", "The use of the same embeddings for the shared entities helps in propagating the factual knowledge present in the text to the KB more efficiently, so the Same Embedding alignment performs better than others.", "The Entity Name alignment approach is worse than the Same embedding alignment approach since the test set entities occur less often in the train set (as the dataset is few-shot).", "So, the name graph doesn't make a substantial difference here.", "For the analogical reasoning task, the results show that all alignment approaches obtain an improvement over the naive skip-gram baseline.", "The Entity Name alignment approach performs the best followed by Projection, Same Embedding alignment, and alignment using Wikipedia Anchors.", "The good performance of the Entity Name alignment approach could be explained by the fact that for every test analogy example ( e h 1 , e t 1 , e h 2 , e t 2 ) , there is a relation r present between the entity pairs ( e h 1 , e t 1 ) and ( e h 2 , e t 2 ) , although that is unobserved.", "Since e h and e t also occur in the KB, due to the extra added triples, the KB reasoning process incorporates the relation r in these embeddings, just like it does for KB entities h and t .", "The other approaches viz.", "Same Embedding alignment, Projection, and Wikipedia Anchors don't have a mechanism for explicit KB reasoning like the Entity Name alignment approach.", "The Projection technique outperforms the Same Embedding alignment as the embeddings in the two spaces are less tightly coupled in the former, so it can take advantage of the complementary relational information in textual as well as the KB embeddings.", "In this section, we present a fine-grained analysis of the efficacy of the alignment methods w.r.t. 
changes in training data size and whether the test set entities belong to the support set.", "We also study the impact of the balance parameter on the performance of the two evaluation tasks.", "Due to resource constraints, we do this analysis on two representative methods of different nature: Projection alignment and Same Embedding alignment.", "Effect of Training data size.", "(Table 1: Overall results for the two evaluation tasks, MR/Hits@1/Hits@10 for few-shot link prediction and for analogical reasoning: TransE 187/20.3/40.4; Skip-gram 25/50.6/78.0; Projection 134/22.9/47.2 and 12/65.9/89.0; Same Embedding align.)", "To study and differentiate the impact of entities present in the support set on the performance of the few-shot link prediction task, we create two versions of the training set with different sizes:", "(a) Full version: In this version of the training set, we include all triples in Wikidata which don't violate the few-shot property of the dataset.", "This is the same as the training set for the evaluation proposed in Section 5.1.", "(b) Support version: In this version of the training set, we exclude triples from the full version whose head or tail entity isn't present in the support set.", "Next, we analyze the impact of whether the head/tail entity of the test triple is present in the support set S on the few-shot link prediction performance.", "To this end, we create two versions of the test set:", "(a) Both in support: Both the head and tail entity of the triple lie in the support set.", "(b) Missing support: At least one of the head/tail entities of the triple doesn't lie in the support set.", "The statistics for this dataset are given in Table 3. The results for the training data size analysis for different alignment methods on the Test set (Both in support) are shown in Table 4. The results show that for both the Projection and Same Embedding alignment approaches, the performance is significantly better when using the full training set of triples instead of just the support version.", "This shows that triples involving non-support set entities play a vital role in helping learn better entity and relation representations, which in turn helps in injecting textual information into the KB embeddings via alignment.", "Effect of Support set for Test triples.", "Here, we investigate the performance of the few-shot link prediction task for triples whose entities may not lie in the support set.", "The results for this evaluation are given in Table 5. We observe that there is no significant gain in performance for any of the alignment methods over the simple TransE baseline.", "This shows that these alignment methods are only effective for triples both of whose entities lie in the support set.", "Effect of balance parameter.", "In this analysis, we study the role of the balance parameter for the Projection alignment method.", "This parameter controls the extent of alignment between the two embedding spaces.", "The higher the value of the balance parameter, the more the embedding tries to capture the entity information from the other embedding space, rather than its own.", "The results of this study are shown in Table 2.
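To make the role of the balance parameter concrete, the sketch below shows how the TransE score d_r(h, t) = \|h + r - t\|_2 and the Projection alignment loss defined earlier can be combined under a balance-parameter weighting. It is only an illustration: it uses PyTorch, a plain margin-based ranking loss stands in for the exact Sun et al. (2019) objective used in the paper, and names such as `ProjectionAlignment` and `balance_param` are assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn


class ProjectionAlignment(nn.Module):
    """Illustrative sketch: TransE score plus a balance-weighted projection
    alignment term between the skip-gram and TransE entity spaces."""

    def __init__(self, dim_sg, dim_te, margin=1.0, balance_param=1.0):
        super().__init__()
        self.proj = nn.Linear(dim_sg, dim_te)  # affine map: W e_SG + b
        self.margin = margin                   # margin in the KB objective
        self.balance_param = balance_param     # weight on the alignment loss

    def transe_score(self, h, r, t):
        # d_r(h, t) = ||h + r - t||_2
        return torch.norm(h + r - t, p=2, dim=-1)

    def kb_loss(self, pos, neg):
        # Simple margin-based stand-in for the Sun et al. (2019) objective:
        # positive triples should score lower (closer) than corrupted ones.
        pos_d = self.transe_score(*pos)
        neg_d = self.transe_score(*neg)
        return torch.relu(self.margin + pos_d - neg_d).mean()

    def alignment_loss(self, e_sg, e_te):
        # ||(W e_SG + b) - e_TE||_2^2, averaged over shared entities in a batch
        return ((self.proj(e_sg) - e_te) ** 2).sum(dim=-1).mean()

    def joint_loss(self, pos, neg, e_sg, e_te):
        return self.kb_loss(pos, neg) + self.balance_param * self.alignment_loss(e_sg, e_te)
```

Sweeping `balance_param` over the grid reported in Table 2 only changes the relative weight of the alignment term against the KB objective.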
The peak performance for the few-shot link prediction task is obtained for balance parameter = 1e0 in terms of Hits@1 and Hits@10.", "In contrast, for the analogical reasoning task, the peak performance is obtained for balance parameter = 1e-3.", "This difference in the optimal value of the balance parameter can be explained by the fact that the skip-gram objective relies on cosine similarity, which is more sensitive to changes in the values of vector embeddings than the TransE model.", "We show this analytically.", "Let (h, r, t) be a KB triple and let h, r, and t denote the embeddings of h, r, and t respectively.", "The partial derivative of the score function of the triple w.r.t. h is given by \partial d_r(h, t)/\partial h = (h + r - t) / \|h + r - t\|_2, where d_r(h, t) = \|h + r - t\|_2, so \| \partial d_r(h, t)/\partial h \|_2 = 1. Similarly, let (u, v) be an entity-word pair in the text corpus.", "Let u and v denote the embeddings of u and v respectively.", "The partial derivative of the score function for the entity-word pair (u, v) w.r.t. u is computed analogously. (Table 2: Overall results for the Projection alignment model for different values of the balance parameter, MR/Hits@1/Hits@10 for few-shot link prediction and for analogical reasoning: TransE 187/20.3/40.4; Skip-gram 25/50.6/78.0; Projection with balance param. 1e-4: 188/20.4/40.4 and 14/65.0/88.0; 1e-3: 186/20.5/40.5 and 12/65.9/89.0; 1e-2: 184/20.6/40.6 and 10/61.4/87.3; 1e-1: 169/20.7/42.0 and 16/57.8/84.2; 1e0: 134/22.9/47.2 and 23/49.6/78.9; 1e1: 129/21.4/43.1 and 26/42.2/75.4.)", "Since d(u, v) = \exp(u^T v), we have \partial d(u, v)/\partial u = \exp(u^T v) \, v, so \| \partial d(u, v)/\partial u \|_2 = \exp(u^T v) \, \|v\|_2. The value of \| \partial d_r(h, t)/\partial h \|_2 equals 1, whereas for", "the skip-gram model, \| \partial d(u, v)/\partial u \|_2 = \exp(u^T v) \, \|v\|_2, which is greater than 1, as seen empirically.", "This shows that the skip-gram embeddings are more sensitive to small changes in the values of the parameters.", "Thus, for them to be reasonably aligned with their KB counterparts without losing the textual information, a lower value of the balance parameter is optimal.", "Recently, the COVID pandemic (Fauci et al., 2020) has been responsible for bringing a tremendous change in the lives of people across the globe.", "Through this case study, we demonstrate that aligning embedding representations can help us do knowledge base completion for recent events like COVID-19.", "We selected 4 relevant relations (Risk factor, Symptoms, Medical Condition and Cause of Death) with at least 10 triples in the difference between the March 2020 and December 2020 snapshots of Wikidata.", "We use the March 2020 Wikidata and December 2020 Wikipedia to train the alignment models and do link prediction on these triples.", "For each of the relations, we keep the COVID-19 entity (Entity ID: Q84263196) unchanged and corrupt the other entity in the triple.", "This would correspond to asking questions like What are the symptoms of COVID-19?, Who died due to COVID-19?
etc.", "The results are shown in Table 6.", "We observe that the Projection model obtains a decent improvement over the TransE model on the link prediction task on these triples in terms of Mean Rank.", "Similarly, the Same Embedding alignment model obtains outperforms the TransE baseline for three out of four relations.", "This case study gives a real-life use-case of how the text information can be injected into the KB embeddings using alignment in scenarios when such information is not yet curated in the KB in structured form.", "In this work, we presented a systematic study of different alignment approaches that can be applied to align entity representations in a knowledge base and textual corpora.", "By evaluating on the few-shot link prediction task and analogical reasoning task, we found that although all approaches have the desired outcome, i.e., to incorporate information from the other modality, some approaches perform better than others on a particular task.", "We also analyzed the impact of different factors such as the size of the training set, the presence of test set entities in the support set, and the balance parameter on the evaluation task performance.", "We believe our evaluation framework, as well as jointly trained embeddings can serve as a useful resource for future research and applications.", "We would like to thank the anonymous reviewers for their helpful comments.", "This research was sponsored by a gift grant from Fujitsu and the Ohio Supercomputer Center (Center, 1987)." ]
[ "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "result", "method", "method", "other", "other" ]
[ "Abstract", "We present a graph-based Tree Adjoining Grammar (TAG) parser that uses BiLSTMs, highway connections, and character-level CNNs.", "Our best end-to-end parser, which jointly performs supertagging, POS tagging, and parsing, outperforms the previously reported best results by more than 2.2 LAS and UAS points.", "The graph-based parsing architecture allows for global inference and rich feature representations for TAG parsing, alleviating the fundamental trade-off between transition-based and graph-based parsing systems.", "We also demonstrate that the proposed parser achieves state-of-the-art performance in the downstream tasks of Parsing Evaluation using Textual Entailments (PETE) and Unbounded Dependency Recovery.", "This provides further support for the claim that TAG is a viable formalism for problems that require rich structural analysis of sentences.", "Tree Adjoining Grammar (TAG, Joshi and Sch-abes (1997)) and Combinatory Categorial Grammar (CCG, Steedman and Baldridge (2011)) are both mildly context-sensitive grammar formalisms that are lexicalized: every elementary structure (elementary tree for TAG and category for CCG) is associated with exactly one lexical item, and every lexical item of the language is associated with a finite set of elementary structures in the grammar (Rambow and Joshi, 1994).", "In TAG and CCG, the task of parsing can be decomposed into two phases (e.g. TAG: Bangalore and Joshi (1999); CCG: Clark and Curran (2007)): supertagging , where elementary units or supertags are assigned to each lexical item and parsing where these supertags are combined together.", "The first phase of supertagging can be considered as almost pars-ing because supertags for a sentence almost always determine a unique parse (Bangalore and Joshi, 1999).", "This near uniqueness of a parse given a gold sequence of supertags has been con-firmed empirically (TAG: Bangalore et al. (2009); Chung et al. (2016); Kasai et al. (2017); CCG: Lewis et al. (2016)).", "We focus on TAG parsing in this work.", "TAG differs from CCG in having a more varied set of supertags.", "Concretely, the TAG-annotated version of the WSJ Penn Treebank (Marcus et al., 1993) that we use (Chen et al., 2005) includes 4727 distinct supertags (2165 occur once) while the CCG-annotated version (Hockenmaier and Steedman, 2007) only includes 1286 distinct supertags (439 occur once).", "This large set of supertags in TAG presents a severe challenge in supertagging and causes a large discrepancy in parsing performance with gold supertags and predicted supertags (Ban-galore et al., 2009; Chung et al., 2016; Kasai et al., 2017).", "In this work, we present a supertagger and a parser that substantially improve upon previously reported results.", "We propose crucial modifications to the bidirectional LSTM (BiLSTM) supertagger in Kasai et al. 
(2017).", "First, we use character-level Convolutional Neural Networks (CNNs) for encoding morphological information instead of suf-fix embeddings.", "Secondly, we perform concatenation after each BiLSTM layer.", "Lastly, we explore the impact of adding additional BiLSTM layers and highway connections.", "These techniques yield an increase of 1.3% in accuracy.", "For parsing, since the derivation tree in a lexicalized TAG is a type of dependency tree (Rambow and Joshi, 1994), we can directly apply dependency parsing models.", "In particular, we use the biaffine graph-based parser proposed by Dozat and Manning (2017) together with our novel techniques for supertagging.", "In addition to these architectural extensions for supertagging and parsing, we also explore multitask learning approaches for TAG parsing.", "Specif-1181 ically, we perform POS tagging, supertagging, and parsing using the same feature representations from the BiLSTMs.", "This joint modeling has the benefit of avoiding a time-consuming and complicated pipeline process, and instead produces a full syntactic analysis, consisting of supertags and the derivation that combines them, simultaneously.", "Moreover, this multi-task learning framework further improves performance in all three tasks.", "We hypothesize that our multi-task learning yields feature representations in the LSTM layers that are more linguistically relevant and that generalize better (Caruana, 1997).", "We provide support for this hypothesis by analyzing syntactic analogies across induced vector representations of supertags (Kasai et al., 2017; Friedman et al., 2017).", "The end-to-end TAG parser substantially outperforms the previously reported best results.", "Finally, we apply our new parsers to the downstream tasks of Parsing Evaluation using Textual Entailements (PETE, Yuret et al. 
(2010)) and Unbounded Dependency Recovery (Rimell et al., 2009).", "We demonstrate that our end-to-end parser outperforms the best results in both tasks.", "These results illustrate that TAG is a viable formalism for tasks that benefit from the assignment of rich structural descriptions to sentences.", "TAG parsing can be decomposed into supertagging and parsing .", "Supertagging assigns to words elementary trees (supertags) chosen from a finite set, and parsing determines how these elementary trees can be combined to form a derivation tree that yield the observed sentence.", "The combinatory operations consist of substitution , which inserts obligatory arguments, and adjunction , which is responsible for the introduction of modifiers, function words, as well as the derivation of sentences involving long-distance dependencies.", "In this section, we present our supertagging models, parsing models, and joint models.", "Recent work has explored neural network models for supertagging in TAG (Kasai et al., 2017) and CCG (Xu et al., 2015; Lewis et al., 2016; Vaswani et al., 2016; Xu, 2016), and has shown that such models substantially improve performance beyond non-neural models.", "We extend previously proposed BiLSTM-based models (Lewis et al., 2016; Kasai et al., 2017) in three ways: 1) we add character-level Convolutional Neural Networks (CNNs) to the input layer, 2) we perform concatenation of both directions of the LSTM not only after the final layer but also after each layer, and 3) we use a modified BiLSTM with highway connections.", "The input for each word is represented via concatenation of a 100-dimensional embedding of the word, a 100-dimensional embedding of a predicted part of speech (POS) tag, and a 30-dimensional character-level representation from CNNs that have been found to capture morphological information (Santos and Zadrozny, 2014; Chiu and Nichols, 2016; Ma and Hovy, 2016).", "The CNNs encode each character in a word by a 30 dimensional vector and 30 filters produce a 30 dimensional vector for the word.", "We initialize the word embeddings to be the pre-trained GloVe vectors (Pennington et al., 2014); for words not in GloVe, we initialize their embedding to a zero vector.", "The other embeddings are randomly initialized.", "We obtain predicted POS tags from a BiLSTM POS tagger with the same configuration as in Ma and Hovy (2016).", "The core of the supertagging model is a deep bidirectional Long Short-Term Memory network (Graves and Schmidhuber, 2005).", "We use the following formulas to compute the activation of a single LSTM cell at time step t : i t = ( W i [ x t ; h t 1 ] + b i ) (1) f t = ( W f [ x t ; h t 1 ] + b f ) (2) c t = tanh ( W c [ x t ; h t 1 ] + b c ) (3) o t = ( W o [ x t ; h t 1 ] + b o ) (4) c t = f (cid:12) c t 1 + i t (cid:12) c t (5) h t = o (cid:12) tanh ( c t ) (6) Here a semicolon ; means concatenation, (cid:12) is element-wise multiplication, and is the sigmoid function.", "In the first BiLSTM layer, the input x t is the vector representation of word t .", "(The sequence is reversed for the backwards pass.)", "In all subsequent layers, x t is the corresponding output from the previous BiLSTM; the output of a BiLSTM at timestep t is equal to [ h ft ; h bt ] , the concatenation of hidden state corresponding to input t in the forward and backward pass.", "This concatenation af-1182 ter each layer differs from Kasai et al. (2017) and Lewis et al. 
(2016), where concatenation happens only after the final BiLSTM layer.", "We will show in a later section that concatenation after each layer contributes to improvement in performance.", "We also extend the models in Kasai et al. (2017) and Lewis et al. (2016) by allowing highway connections between LSTM layers.", "A highway connection is a gating mechanism that combines the current and previous layer outputs, which can prevent the problem of vanishing/exploding gradients (Srivastava et al., 2015).", "Specifically, in networks with highway connections, we replace Eq.", "6 by: r t = ( W r [ x t ; h t 1 ] + b r ) h t = r t (cid:12) o t (cid:12) tanh ( c t ) + (1 r t ) (cid:12) W h x t Indeed, our experiments will show that highway connections play a crucial role as we add more BiLSTM layers.", "We generally follow the hyperparameters chosen in Lewis et al. (2016) and Kasai et al. (2017).", "Specifically, we use BiLSTMs layers with 512 units each.", "Input, layer-to-layer, and recurrent (Gal and Ghahramani, 2016) dropout rates are all 0.5.", "For the CNN character-level representation, we used the hyperparameters from Ma and Hovy (2016).", "We train this network, including the embeddings, by optimizing the negative log-likelihood of the observed sequences of supertags in a mini-batch stochastic fashion with the Adam optimization algorithm with batch size 100 and = 0 . 01 (Kingma and Ba, 2015). In order to obtain predicted POS tags and supertags of the training data for subsequent parser input, we also perform 10-fold jackknife training. After each training epoch, we test the supertagger on the dev set. When classification accuracy does not improve on five consecutive epochs, training ends. 2.2 Parsing Model Until recently, TAG parsers have been grammar based, requiring as input a set of elemenetary trees (supertags). For example, Bangalore et al. (2009) proposes the MICA parser, an Earley parser that exploits a TAG grammar that has been transformed into a variant of a probabilistic CFG. One advantage of such a parser is that its parses are guaranteed to be well-formed according to the TAG grammar provided as input. More recent work, however, has shown that data-driven transition-based parsing systems outperform such grammar-based parsers (Chung et al., 2016; Kasai et al., 2017; Friedman et al., 2017). Kasai et al. (2017) and Friedman et al. (2017) achieved state-of-the-art TAG parsing performance using an unlexicalized shift-reduce parser with feed-forward neural networks that was trained on a version of the Penn Treebank that had been annotated with TAG derivations. Here, we pursue this data-driven approach, applying a graph-based parser with deep biaffine attention (Dozat and Manning, 2017) that allows for global training and inference. 2.2.1 Input Representations The input for each word is the concatenation of a 100-dimensional embedding of the word and a 30-dimensional character-level representation obtained from CNNs in the same fashion as in the supertagger. 1 We also consider adding 100-dimensional embeddings for a predicted POS tag (Dozat and Manning, 2017) and a predicted supertag (Kasai et al., 2017; Friedman et al., 2017). The ablation experiments in Kiperwasser and Goldberg (2016) illustrated that adding predicted POS tags boosted performance in Stanford Dependencies. In Universal Dependencies, Dozat et al. (2017) empirically showed that their dependency parser gains significant improvements by using POS tags predicted by a Bi-LSTM POS tagger. Indeed, Kasai et al. 
(2017) and Friedman et al. (2017) demonstrated that their unlexicalized neural network TAG parsers that only get as input predicted supertags can achieve state-of-the-art performance, with lexical inputs providing no improvement in performance. We initialize word embeddings to be the pre-trained GloVe vectors as in the supertagger. The other embeddings are randomly initialized. 2.2.2 Biaffine Parser We train our parser to predict edges between lexical items in an LTAG derivation tree. Edges are labeled by the operations together with the deep syntactic roles of substitution sites (0=underlying subject, 1=underlying direct object, 2=underlying indirect object, 3,4=oblique arguments, CO=co-head for prepositional/particle verbs, and adj=all adjuncts). Figure 1 shows our biaffine parsing ar-1 We fix the embedding of the ROOT token to be a 0-vector. 1183 Figure 1: Biaffine parsing architecture. For the dependency from John to sleeps in the sentence John sleeps , the parser first predicts the head of John and then predicts the dependency label by combining the dependent and head representations. In the joint setting, the parser also predicts POS tags and supertags. chitecture. Following Dozat and Manning (2017) and Kiperwasser and Goldberg (2016), we use BiLSTMs to obtain features for each word in a sentence. We add highway connections in the same fashion as our supertagging model. We first perform unlabeled arc-factored scoring using the final output vectors from the BiLSTMs, and then label the resulting arcs. Specifically, suppose that we score edges coming into the i th word in a sentence i.e. assigning scores to the potential parents of the i th word. Denote the final output vector from the BiLSTM for the k th word by h k and suppose that h k is d -dimensional. Then, we produce two vectors from two separate multilayer perceptrons (MLPs) with the ReLU activation: h arc-dep k = MLP (arc-dep) ( h k ) h arc-head k = MLP (arc-head) ( h k ) where h arc-dep k and h arc-head k are d arc -dimensional vectors that represent the k th word as a dependent and a head respectively. Now, suppose the kth row of matrix H (arc-head) is h arc-head k . Then, the probability distribution s i over the potential heads of the i th word is computed by s i = softmax ( H (arc-head) W (arc) h arc-dep i + H (arc-head) b (arc) ) (7) where W (arc) R d arc d arc and b ( arc ) R d arc . In training, we simply take the greedy maximum probability to predict the parent of each word. In the testing phase, we use the heuristics formulated by Dozat and Manning (2017) to ensure that the resulting parse is single-rooted and acyclic. Given the head prediction of each word in the sentence, we assign labeling scores using vectors obtained from two additional MLP with ReLU. For the k th word, we obtain: h rel-dep k = MLP (rel-dep) ( h k ) h rel-head k = MLP (rel-head) ( h k ) where h rel-dep k , h rel-head k R d rel . Let p i be the index of the predicted head of the i th word, and r be the number of dependency relations in the dataset. Then, the probability distribution i over the possible dependency relations of the arc pointing from the p i th word to the i th word is calculated by: i = softmax ( h T (rel-head) p i U (rel) h (rel-dep) i + W (rel) ( h (rel-head) i + h (rel-head) p i ) + b (rel) ) (8) where U (rel) R d rel d rel r , W (rel) R r d rel , and b (rel) R r . We generally follow the hyperparameters chosen in Dozat and Manning (2017). Specifically, we use BiLSTMs layers with 400 units each. 
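The arc scorer of Eq. 7 can be written down compactly. The sketch below is an illustrative PyTorch re-implementation of the published equations, not the authors' TensorFlow code; the module and tensor names are assumptions, the input dimension of 800 corresponds to the concatenated 400-unit forward and backward LSTM states, and padding masks and the single-root/acyclicity heuristics applied at test time are omitted.

```python
import torch
import torch.nn as nn


class BiaffineArcScorer(nn.Module):
    """Sketch of the unlabeled arc scorer (Eq. 7): scores over candidate heads."""

    def __init__(self, d_lstm=800, d_arc=500):
        super().__init__()
        self.mlp_dep = nn.Sequential(nn.Linear(d_lstm, d_arc), nn.ReLU())
        self.mlp_head = nn.Sequential(nn.Linear(d_lstm, d_arc), nn.ReLU())
        self.W_arc = nn.Parameter(torch.randn(d_arc, d_arc) / d_arc ** 0.5)
        self.b_arc = nn.Parameter(torch.zeros(d_arc))

    def forward(self, h):
        # h: [batch, seq_len, d_lstm] final BiLSTM output vectors
        dep = self.mlp_dep(h)    # h^(arc-dep):  [B, T, d_arc]
        head = self.mlp_head(h)  # h^(arc-head): [B, T, d_arc]
        # scores[b, i, j] = h^(arc-head)_j^T W^(arc) h^(arc-dep)_i + h^(arc-head)_j^T b^(arc)
        scores = torch.einsum('bjd,de,bie->bij', head, self.W_arc, dep)
        scores = scores + torch.einsum('bjd,d->bj', head, self.b_arc).unsqueeze(1)
        # A softmax over the last dimension gives Eq. 7's distribution s_i over heads.
        return scores
```

The labeled scorer of Eq. 8 is analogous, adding a biaffine term with a d_rel x d_rel x r tensor applied to the representations of the predicted head and the dependent.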
Input, layer-to-layer, and recurrent dropout rates are all 0.33. The depths of all MLPs are all 1, and the MLPs for unlabeled attachment and those for labeling contain 500 ( d arc ) and 100 ( d rel ) units respectively. For character-level CNNs, we use the hyperparameters from Ma and Hovy (2016). We train this model with the Adam algorithm to minimize the sum of the cross-entropy losses from head predictions ( s i from Eq. 7) and label predictions ( i from Eq. 8) with = 0 . 01 and batch size 100 (Kingma and Ba, 2015). After each training epoch, we test the parser on the dev set. When labeled attachment score (LAS) 2 does not improve on five consecutive epochs, training ends. 2.3 Joint Modeling The simple BiLSTM feature representations for parsing presented above are conducive to joint modeling of POS tagging and supertagging; rather than using POS tags and supertags to predict a derivation tree, we can instead use the BiLSTM hidden vectors derived from lexical inputs alone 2 We disregard pure punctuation when evaluating LAS and UAS, following prior work (Bangalore et al., 2009; Chung et al., 2016; Kasai et al., 2017; Friedman et al., 2017). 1184 to predict POS tags and supertags along with the TAG derivation tree. h pos k = MLP (pos) ( h k ) h stag k = MLP (stag) ( h k ) where h pos k R d pos and h stag k R d stag . We obtain probability distribution over the POS tags and supertags by: softmax ( W (pos) h pos k + b (pos) ) (9) softmax ( W (stag) h stag k + b (stag) ) (10) where W (pos) , b (pos) , W (stag) , and b (stag) are in R n pos d pos , R n pos , R n stag d stag , and R n stag respectively, with n pos and n stag the numbers of possible POS tags and supertags respectively. We use the same hyperparameters as in the parser. The MLPs for POS tagging and supertagging both contain 500 units. We again train this model with the Adam algorithm to minimize the sum of the cross-entropy losses from head predictions ( s i from Eq. 7), label predictions ( i from Eq. 8), POS predictions (Eq. 9), and supertag predictions (Eq. 10) with = 0 . 01 and batch size 100. After each training epoch, we test the parser on the dev set and compute the percentage of each token that is assigned the correct parent, relation, supertag, and POS tag. When the percentage does not improve on five consecutive epochs, training ends. This joint modeling has several advantages. First, the joint model yields a full syntactic analysis simultaneously without the need for training separate models or performing jackknife training. Secondly, joint modeling introduces a bias on the hidden representations that could allow for better generalization in each task (Caruana, 1997). Indeed, in experiments described in a later section, we show empirically that predicting POS tags and supertags does indeed benefit performance on parsing (as well as the tagging tasks). 3 Results and Discussion We follow the protocol of Bangalore et al. (2009), Chung et al. (2016), Kasai et al. (2017), and Friedman et al. (2017); we use the grammar and the TAG-annotated WSJ Penn Tree Bank extracted by Chen et al. (2005). Following that work, we use Sections 01-22 as the training set, Section 00 as the dev set, and Section 23 as the test set. The training, dev, and test sets comprise 39832, 1921, and 2415 sentences, respectively. We implement all of our models in TensorFlow (Abadi et al., 2016). 
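Since the joint objective is simply the sum of the four token-level cross-entropy terms (Eqs. 7-10), it can be sketched as below. This is again an illustration in PyTorch rather than the released TensorFlow implementation; the layer names and the POS inventory size are assumptions (the supertag inventory size of 4727 is taken from the corpus statistics quoted earlier), and padding is ignored for brevity.

```python
import torch.nn as nn
import torch.nn.functional as F


class JointTaggingHeads(nn.Module):
    """Sketch: POS and supertag classifiers sharing the parser's BiLSTM features."""

    def __init__(self, d_lstm=800, d_tag=500, n_pos=45, n_stag=4727):
        super().__init__()
        self.pos_mlp = nn.Sequential(nn.Linear(d_lstm, d_tag), nn.ReLU(),
                                     nn.Linear(d_tag, n_pos))    # Eq. 9
        self.stag_mlp = nn.Sequential(nn.Linear(d_lstm, d_tag), nn.ReLU(),
                                      nn.Linear(d_tag, n_stag))  # Eq. 10

    def forward(self, h):
        # h: [batch, seq_len, d_lstm] shared BiLSTM features
        return self.pos_mlp(h), self.stag_mlp(h)


def joint_loss(arc_scores, rel_scores, pos_scores, stag_scores,
               heads, rels, pos_tags, stags):
    # Sum of the four cross-entropy terms; each score tensor is
    # [batch, seq_len, n_classes] and each target is [batch, seq_len].
    loss = F.cross_entropy(arc_scores.flatten(0, 1), heads.flatten())
    loss = loss + F.cross_entropy(rel_scores.flatten(0, 1), rels.flatten())
    loss = loss + F.cross_entropy(pos_scores.flatten(0, 1), pos_tags.flatten())
    loss = loss + F.cross_entropy(stag_scores.flatten(0, 1), stags.flatten())
    return loss
```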
3 3.1 Supertaggers Our BiLSTM POS tagger yielded 97.37% and 97.53% tagging accuracy on the dev and test sets, performance on par with the state-of-the-art (Ling et al., 2015; Ma and Hovy, 2016). 4 Seen in the middle section of Table 1 is supertagging performance obtained from various model configurations. Final concat in the model name indicates that vectors from forward and backward pass are concatenated only after the final layer. Concatenation happens after each layer otherwise. Numbers immediately after BiLSTM indicate the numbers of layers. CNN, HW, and POS denote respectively character-level CNNs, highway connections, and pipeline POS input from our BiLSTM POS tagger. Firstly, the differences in performance between BiLSTM2 (final concat) and BiLSTM2 and between BiLSTM2 and BiLSTM2-CNN suggest an advantage to performing concatenation after each layer and adding character-level CNNs. Adding predicted POS to the input somewhat helps supertagging though the difference is small. Adding a third BiLSTM layer helps only if there are highway connections, presumably because deeper BiLSTMs are more vulnerable to the vanishing/exploding gradient problem. Our supertagging model (BiLSTM3-HW-CNN-POS) that performs best on the dev set achieves an accuracy of 90.81% on the test set, outperforming the previously best result by more than 1.3%. 3.2 Parsers Table 3 shows parsing results on the dev set. Abbreviations for models are as before with one addition: Stag denotes pipeline supertag input from our best supertagger (BiLSTM3-HW-CNN-POS in Table 1). As with supertagging, we observe a gain from adding character-level CNNs. Interestingly, adding predicted POS tags or supertags deteriorates performance with BiLSTM3. These results suggest that morphological information and word information from character-level CNNs and word embeddings overwhelm the in-3 Our code is available online for easy replication of our results at https://github.com/jungokasai/ graph_parser . 4 We cannot directly compare these results because the data split is different in the POS tagging literature. 1185 Supertagger Dev Test Bangalore et al. (2009) 88.52 86.85 Chung et al. (2016) 87.88 Kasai et al. (2017) 89.32 89.44 BiLSTM2 (final concat) 88.96 BiLSTM2 89.60 BiLSTM2-CNN 89.97 BiLSTM2-CNN-POS 90.03 BiLSTM2-HW-CNN-POS 90.12 BiLSTM3-CNN-POS 90.12 BiLSTM3-HW-CNN-POS 90.45 90.81 BiLSTM4-CNN-POS 89.99 BiLSTM4-HW-CNN-POS 90.43 Joint (Stag) 90.51 Joint (POS+Stag) 90.67 91.01 Table 1: Supertagging Results. Joint (Stag) and Joint (POS+Stag) indicate joint parsing models that perform supertagging, and POS tagging and supertagging respectively. POS tagger Dev Test BiLSTM 97.37 97.53 Joint (POS+Stag) 97.54 97.73 Table 2: POS tagging results. formation from predicted POS tags and supertags. Again, highway connections become crucial as the number of layers increases. We finally evaluate the parsing model with the best dev performance (BiLSTM4-HW-CNN) on the test set (Table 3). It achieves 91.37 LAS points and 92.77 UAS points, improvements of 1.8 and 1.7 points respectively from the state-of-the-art. 3.3 Joint Models We provide joint modeling results for supertagging and parsing in Tables 2 and 3. For these joint models, we employed the best parsing configuration (4 layers of BiLSTMs, character-level CNNs, and highway connections), with and without POS tagging added as an additional task. 
We can observe that our full joint model that performs 1 2 3 4 5 6 7 8 9 10 11+ 80 85 90 95 Our Joint Parser Shift-reduce Parser Figure 2: F1 Score with Dependency Length. Dev Test Parser UAS LAS UAS LAS Bangalore et al. (2009) 87.60 85.80 86.66 84.90 Chung et al. (2016) 89.96 87.86 Friedman et al. (2017) 90.36 88.91 90.31 88.96 Kasai et al. (2017) 90.88 89.39 90.97 89.68 BiLSTM3 91.75 90.22 BiLSTM3-CNN 92.27 90.76 BiLSTM3-CNN-POS 92.07 90.53 BiLSTM3-CNN-Stag 92.15 90.65 BiLSTM3-HW-CNN 92.29 90.71 BiLSTM4-CNN 92.11 90.66 BiLSTM4-HW-CNN 92.78 91.26 92.77 91.37 BiLSTM5-CNN 92.34 90.77 BiLSTM5-HW-CNN 92.64 91.11 Joint (Stag) 92.97 91.48 Joint (POS+Stag) 93.22 91.80 93.26 91.89 Joint (Shuffled Stag) 92.23 90.56 Table 3: Parsing results on the dev and test sets. POS tagging, supertagging, and parsing further improves performance in all of the three tasks, yielding the test result of 91.89 LAS and 93.26 UAS points, an improvement of more than 2.2 points each from the state-of-the-art. Figures 2 and 3 illustrate the relative performance of the feed-forward neural network shift-reduce TAG parser (Kasai et al., 2017) and our joint graph-based parser with respect to two of the measures explored by McDonald and Nivre (2011), namely dependency length and distance between a dependency and the root of a parse. The graph-based parser outperforms the shift-reduce parser across all conditions. Most interesting is the fact that the graph-based parser shows less of an effect of dependency length. Since the shift-reduce parser builds a parse sequentially with one parsing action depending on those that come before it, we would expect to find a propogation of errors made in establishing shorter dependencies to the establishment of longer dependencies. Lastly, it is worth noting our joint parsing ar-1 2 3 4 5 6 7 8 9 10 11+ 90 95 Our Joint Parser Shift-reduce Parser Figure 3: F1 Score with Distance to Root. 1186 chitecture has a substantial advantage regarding parsing speed. Since POS tagging, supertagging, and parsing decisions are made independently for each word in a sentence, our system can parallelize computation once the sentence is encoded in the BiLSTM layers. Our current implementation processes 225 sentences per second on a single Tesla K80 GPU, an order of magnitude faster than the MICA system (Bangalore et al., 2009). 5 4 Joint Modeling and Network Representations Given the improvements we have derived from the joint models, we analyze the nature of inductive bias that results from multi-task training and attempt to provide an explanation as to why joint modeling improves performance. 4.1 Noise vs. Inductive Bias One might argue that joint modeling improves performance merely because it adds noise to each task and prevents over-fitting. If the introduction of noise were the key, we would still expect to gain an improvement in parsing even if the target supertag were corrupted, say by shuffling the order of supertags for the entire training data (Caruana, 1997). We performed this experiment, and the result is shown as Joint (Shuffled Stag) in Table 3. Parsing performance falls behind the best non-joint parser by 0.7 LAS points. This suggests that inducing the parser to create representations to predict both supertags and a parse tree is beneficial for both tasks, beyond a mere introduction of noise. 4.2 Syntactic Analogies We next analyze the induced vector representations in the output projection matrices of our supertagger and joint parsers using the syntactic analogy framework (Kasai et al., 2017). 
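The analogy test referred to here, and spelled out in the passage that follows, uses the usual vector-offset protocol: for an analogy a : b :: c : d, the query a - b + d is compared against all supertag embeddings by cosine similarity and the rank of the correct answer c is recorded. A minimal sketch is given below (NumPy; the function name and the choice to exclude the three query items from the candidate list are assumptions):

```python
import numpy as np


def analogy_rank(emb, a, b, c, d):
    """Rank of the correct supertag c for the query a - b + d (1 = correct top choice).
    `emb` is an [n_supertags, dim] matrix, e.g. rows of the output projection matrix."""
    query = emb[a] - emb[b] + emb[d]
    query = query / np.linalg.norm(query)
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ query          # cosine similarity to every supertag
    sims[[a, b, d]] = -np.inf      # assumed: the query items themselves are excluded
    order = np.argsort(-sims)
    return int(np.where(order == c)[0][0]) + 1

# Accuracy would then be the fraction of equations with rank 1, and the
# average of this rank over all equations corresponds to "Avg. rank".
```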
Consider, for instance, the analogy that an elementary tree representing a clause headed by a transitive verb (t27) is to a clause headed by an intransitive verb (t81) as a subject relative clause headed by a transitive verb (t99) is to a subject relative headed by an intransitive verb (t109). Following the ideas in Mikolov et al. (2013) for word analogies, we can express this structural analogy as t27 t81 + 5 While such computational resources were not available in 2009, our parser differs from the MICA chart parser in being able to better exploit parallel computation enabled by modern GPUs. t109 = t99 and test it by cosine similarity. Table 4 shows the results of the analogy test with 246 equations involving structural analogies with only the 300 most frequent supertags in the training data. While the embeddings (projection matrix) from the independently trained supertagger do not appear to reflect the syntax, those obtained from the joint models yield linguistic structure despite the fact that the supertag embeddings (projection matrix) is trained without any a priori syntactic knowledge about the elementary trees. The best performance is obtained by the supertag representations obtained from the training of the transition-based parser Kasai et al. (2017) and Friedman et al. (2017). For the transition-based parser, it is beneficial to share statistics among the input supertags that differ only by a certain operation or property (Kasai et al., 2017) during the training phase, yielding the success in the analogy task. For example, a transitive verb supertag whose object has been filled by substitution should be treated by the parser in the same way as an intransitive verb supertag. In our graph-based parsing setting, we do not have a notion of parse history or partial derivations that directly connect intransitive and transitive verbs. However, syntactic analogies still hold to a considerable degree in the vector representations of supertags induced by our joint models, with average rank of the correct answer nearly the same as that obtained in the transition-based parser. This analysis bolsters our hypothesis that joint training biases representation learning toward linguistically sensible structure. The supertagger is just trained to predict linear sequences of supertags. In this setting, many intervening supertags can occur, for instance, between a subject noun and its verb, and the supertagger might not be able to systematically link the presence of the two in the sequence. In the joint models, on the other hand, parsing actions will explicitly guide the network to associate the two supertags. 5 Downstream Tasks Previous work has applied TAG parsing to the downstream tasks of syntactically-oriented textual entailment (Xu et al., 2017) and semantic role labeling (Chen and Rambow, 2003). In this work, we apply our parsers to the textual entailment and unbounded dependency recovery tasks and achieve state-of-the-art performance. These re-1187 Parser / Supertagger %correct Avg. rank Transition-based 67.07 2.36 Our Supertagger 0.00 152.46 Our Joint (Stag) 29.27 2.55 Our Joint (POS+Stag) 30.08 2.57 Table 4: Syntactic analogy test results on the 300 most frequent supertags. Avg. rank is the average position of the correct choice in the ranked list of the closest neighbors; the top line indicates the result of using supertag embeddings that are trained jointly with a transition based parser (Friedman et al., 2017). 
sults bolster the significance of the improvements gained from our joint parser and the utility of TAG parsing for downstream tasks. 5.1 PETE Parser Evaluation using Textual Entailments (PETE) is a shared task from the SemEval-2010 Exercises on Semantic Evaluation (Yuret et al., 2010). The task was intended to evaluate syntactic parsers across different formalisms, focusing on entailments that could be determined entirely on the basis of the syntactic representations of the sentences that are involved, without recourse to lexical semantics, logical reasoning, or world knowledge. For example, syntactic knowledge alone tells us that the sentence John, who loves Mary, saw a squirrel entails John saw a squirrel and John loves Mary but not, for instance, that John knows Mary or John saw an animal . Prior work found the best performance was achieved with parsers using grammatical frameworks that provided rich linguistic descriptions, including CCG (Rimell and Clark, 2010; Ng et al., 2010), Minimal Recursion Semantics (MRS) (Lien, 2014), and TAG (Xu et al., 2017). Xu et al. (2017) provided a set of linguistically-motivated transformations to use TAG derivation trees to solve the PETE task. We follow their procedures and evaluation for our new parsers. We present test results from two configurations in Table 5. One configuration is a pipeline approach that runs our BiLSTM POS tagger, supertagger, and parser. The other one is a joint approach that only uses our full joint parser. The joint method yields 78.1% in accuracy and 76.4% in F1, improvements of 2.4 and 2.7 points over the previously reported best results. System %A %P %R F1 Rimell and Clark (2010) 72.4 79.6 62.8 70.2 Ng et al. (2010) 70.4 68.3 80.1 73.7 Lien (2014) 70.7 88.6 50.0 63.9 Xu et al. (2017) 75.7 88.1 61.5 72.5 Our Pipeline Method 77.1 86.6 66.0 74.9 Our Joint Method 78.1 86.3 68.6 76.4 Table 5: PETE test results. Precision (P), recall (R), and F1 are calculated for entails. 5.2 Unbounded Dependency Recovery The unbounded dependency corpus (Rimell et al., 2009) specifically evaluates parsers on unbounded dependencies, which involve a constituent moved from its original position, where an unlimited number of clause boundaries can intervene. The corpus comprises 7 constructions: object extraction from a relative clause (ObRC), object extraction from a reduced relative clause (ObRed), subject extraction from a relative clause (SbRC), free relatives (Free), object wh-questions (ObQ), right node raising (RNR), and subject extraction from an embedded clause (SbEm). Because of variations across formalisms in their representational format for unbounded depden-dencies, past work has conducted manual evaluation on this corpus (Rimell et al., 2009; Nivre et al., 2010). We instead conduct an automatic evaluation using a procedure that converts TAG parses to structures directly comparable to those specified in the unbounded dependency corpus. To this end, we apply two types of structural transformation in addition to those used for the PETE task: 6 1) a more extensive analysis of coordination, 2) resolution of differences in dependency representations in cases involving copula verbs and co-anchors (e.g., verbal particles). See Appendix A for details. After the transformations, we simply check if the resulting dependency graphs contain target labeled arcs given in the dataset. Table 6 shows the results. Our joint parser outperforms the other parsers, including the neural network shift-reduce TAG parser (Kasai et al., 2017). 
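Because the evaluation just described reduces to checking whether each gold labeled arc appears in the transformed parser output, it can be sketched as a simple set-membership test. The data structures below are assumptions about how the arcs might be stored, not the corpus' actual format:

```python
def unbounded_recall(gold_arcs, predicted_graphs):
    """gold_arcs: iterable of (sentence_id, head, dependent, label) tuples.
    predicted_graphs: dict mapping sentence_id to a set of (head, dependent, label)
    arcs obtained from the transformed TAG derivation trees."""
    gold_arcs = list(gold_arcs)
    recovered = sum(
        1 for sid, head, dep, label in gold_arcs
        if (head, dep, label) in predicted_graphs.get(sid, set())
    )
    return recovered / len(gold_arcs)
```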
Our data-driven parsers yield relatively low performance in the ObQ and RNR constructions. Performance on ObQ is low, we expect, because of their rarity in the data on which the parser is 6 One might argue that since the unbounded dependency evaluation is recall-based, we added too many edges by the transformations. However, it turns out that applying all the transformations for the corpus even improves performance on PETE (77.6 F1 score), which considers precision and recall, verifying that our transformations are reasonable. 1188 System ObRC ObRed SbRC Free ObQ RNR SbEm Total Avg C&C (CCG) 59.3 62.6 80.0 72.6 72.6 49.4 22.4 53.6 61.1 Enju (HPSG) 47.3 65.9 82.1 76.2 32.5 47.1 32.9 54.4 54.9 Stanford (PCFG) 22.0 1.1 74.7 64.3 41.2 45.4 10.6 38.1 37.0 MST (Stanford Dependencies) 34.1 47.3 78.9 65.5 41.2 45.4 37.6 49.7 50.0 MALT (Stanford Dependencies) 40.7 50.5 84.2 70.2 31.2 39.7 23.5 48.0 48.5 NN Shift-Reduce TAG Parser 60.4 75.8 68.4 79.8 53.8 45.4 44.7 59.4 61.2 Our Joint Method 72.5 78.0 81.1 85.7 56.3 47.1 49.4 64.9 67.0 Table 6: Parser accuracy on the unbounded dependency corpus. The results of the first five parsers are taken from Rimell et al. (2009) and Nivre et al. (2010). The Total and Avg columns indicate the percentage of correctly recovered dependencies out of all dependencies and the average of accuracy on the 7 constructions. trained. 7 For RNR, rarity may be an issue as well as the limits of the TAG analysis of this construction. Nonetheless, we see that the rich structural representations that a TAG parser provides enables substantial improvements in the extraction of unbounded dependencies. In the future, we hope to evaluate state-of-the-art Stanford dependency parsers automatically. 6 Related Work The two major classes of data-driven methods for dependency parsing are often called transition-based and graph-based parsing (Kubler et al., 2009). Transition-based parsers (e.g. MALT (Nivre, 2003)) learn to predict the next transition given the input and the parse history. Graph-based parsers (e.g. MST (McDonald et al., 2005)) are trained to directly assign scores to dependency graphs. Empirical studies have shown that a transition-based parser and a graph-based parser yield similar overall performance across languages (Mc-Donald and Nivre, 2011), but the two strands of data-driven parsing methods manifest the fundamental trade-off of parsing algorithms. The former prefers rich feature representations with parsing history over global training and exhaustive search, and the latter allows for global training and inference at the expense of limited feature representations (Kubler et al., 2009). Recent neural network models for transition-based and graph-based parsing can be viewed as remedies for the aforementioned limitations. Andor et al. (2016) developed a transition-based parser using feed-forward neural networks that performs global training approximated by beam search. The globally normalized objective addresses the label bias problem and makes global 7 The substantially better performance of the C&C parser is in fact the result of additions that were made to the training data. training effective in the transition-based parsing setting. Kiperwasser and Goldberg (2016) incorporated a dynamic oracle (Goldberg and Nivre, 2013) in a BiLSTM transition-based parser that remedies global error propagation. Kiperwasser and Goldberg (2016) and Dozat and Manning (2017) proposed graph-based parsers that have access to rich feature representations obtained from BiLSTMs. 
Previous work integrated CCG supertagging and parsing using belief propagation and dual decomposition approaches (Auli and Lopez, 2011). Nguyen et al. (2017) incorporated a graph-based dependency parser (Kiperwasser and Goldberg, 2016) with POS tagging. Our work followed these lines of effort and improved TAG parsing performance. 7 Conclusion and Future Work In this work, we presented a state-of-the-art TAG supertagger, a parser, and a joint parser that performs POS tagging, supertagging, and parsing. The joint parser has the benefit of giving a full syntactic analysis of a sentence simultaneously. Furthermore, the joint parser achieved the best performance, an improvement of over 2.2 LAS points from the previous state-of-the-art. We have also seen that the joint parser yields state-of-the-art in textual entailment and unbounded dependency recovery tasks, and raised the possibility that TAG can provide useful structural analysis of sentences for other NLP tasks. We will explore more applications of our TAG parsers in future work. References Martn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. 2016. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 . 1189 Daniel Andor, Chris Alberti, David Weiss, Aliak-sei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In ACL . Association for Computational Linguistics, Berlin, Germany, pages 24422452. http:// www.aclweb.org/anthology/P16-1231 . Michael Auli and Adam Lopez. 2011. A comparison of loopy belief propagation and dual decomposition for integrated CCG supertagging and parsing. In ACL . Association for Computational Linguistics, pages 470480. http://www.aclweb.org/ anthology/P11-1048 . Srinivas Bangalore, Pierre Boullier, Alexis Nasr, Owen Rambow, and Benot Sagot. 2009. MICA: A probabilistic dependency parser based on tree insertion grammars (application note). In NAACL-HLT (short) . Association for Computational Linguistics, Boulder, Colorado, pages 185188. http://www.aclweb.org/anthology/N/N09/N09-2047 . Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: An Approach to Almost Parsing. Computational Linguistics 25:237266. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python . OReilly Media. Rich Caruana. 1997. Multitask learning. Machine Learning 28:4175. John Chen, Srinivas Bangalore, and K. Vijay-Shanker. 2005. Automated extraction of tree-adjoining grammars from treebanks. Natural Language Engineering 12(3):251299. John Chen and Owen Rambow. 2003. Use of deep linguistic features for the recognition and labeling of semantic arguments. In EMNLP . pages 4148. http://aclanthology. coli.uni-saarland.de/pdf/W/W03/W03-1006.pdf . Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. TACL 4:357370. https://transacl.org/ojs/ index.php/tacl/article/view/792 . Wonchang Chung, Suhas Siddhesh Mhatre, Alexis Nasr, Owen Rambow, and Srinivas Bangalore. 2016. Revisiting supertagging and parsing: How to use supertags in transition-based parsing. In TAG+ . pages 8592. http://www.aclweb. org/anthology/W16-3309 . Stephen Clark and James R Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics 33(4):493552. http://www.newdesign.aclweb.org/anthology-new/J/J07/J07-4004.pdf . 
Timothy Dozat and Christopher Manning. 2017. Deep biaffine attention for neural dependency parsing. In ICLR. Timothy Dozat, Peng Qi, and Christopher D. Manning. 2017. Stanford's graph-based neural dependency parser at the CoNLL 2017 shared task." ]
[ "abstain", "method", "result", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "abstain", "objective", "method", "objective", "abstain", "method", "abstain", "objective", "method", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "other" ]
[ "Defining action spaces for conversational agents and optimizing their decision-making process with reinforcement learning is an enduring challenge.", "Common practice has been to use handcrafted dialog acts, or the output vocabulary, e.g. in neural encoder decoders, as the action spaces.", "Both have their own limitations.", "This paper proposes a novel latent action framework that treats the action spaces of an end-to-end dialog agent as latent variables and develops unsupervised methods in order to induce its own action space from the data.", "Comprehensive experiments are conducted examining both continuous and discrete action types and two different optimization methods based on stochastic variational inference.", "Results show that the proposed latent actions achieve superior empirical performance improvement over previous word-level policy gradient methods on both DealOrNoDeal and MultiWoz dialogs.", "Our detailed analysis also provides insights about various latent variable approaches for policy learning and can serve as a foundation for developing better latent actions in future research.", "1 1 Introduction Optimizing dialog strategies in multi-turn dialog models is the cornerstone of building dialog systems that more efficiently solve real-world challenges, e.g. providing information (Young, 2006), winning negotiations (Lewis et al., 2017), improving engagement (Li et al., 2016) etc.", "A classic solution employs reinforcement learning (RL) to learn a dialog policy that models the optimal action distribution conditioned on the dialog state (Williams and Young, 2007).", "However, since there are infinite human language possibilities, an enduring challenge has been to define what the 1 Data and code are available at https://github.", "action space is.", "For traditional modular systems, the action space is defined by hand-crafted semantic representations such as dialog acts and slot-values (Raux et al., 2005; Chen et al., 2013) and the goal is to obtain a dialog policy that chooses the best hand-crafted action at each dialog turn.", "But it is limited because it can only handle simple domains whose entire action space can be captured by hand-crafted representations (Walker, 2000; Su et al., 2017).", "This cripples a system's ability to handle conversations in complex domains.", "Conversely, end-to-end (E2E) dialog systems have removed this limit by directly learning a response generation model conditioned on the dialog context using neural networks (Vinyals and Le, 2015; Sordoni et al., 2015).", "To apply RL to E2E systems, the action space is typically defined as the entire vocabulary; every response output word is considered to be an action selection step (Li et al., 2016), which we denote as the word-level RL.", "Word-level RL, however, has been shown to have several major limitations in learning dialog strategies.", "The foremost one is that direct application of word-level RL leads to degenerate behavior: the response decoder deviates from human language and generates utterances that are incomprehensible (Lewis et al., 2017; Das et al., 2017; Kottur et al., 2017).", "A second issue is that since a multi-turn dialog can easily span hundreds of words, word-level RL suffers from credit assignment over a long horizon, leading to slow and suboptimal convergence (Kaelbling et al., 1996; He et al., 2018).", "This paper proposes Latent Action Reinforcement Learning (LaRL), a novel framework that overcomes the limitations of word-level RL for E2E dialog models, marrying the benefits 
of a traditional modular approach in an unsupervised manner.", "The key idea is to develop E2E models that can invent their own discourse-level actions.", "These actions must be expressive enough to capture response semantics in complex domains (i.e. have the capacity to represent a large number of actions), thus decoupling the discourse-level decision-making process from natural language generation.", "Then any RL technique can be applied to this induced action space in the place of word-level output.", "We propose a flexible latent variable dialog framework and investigate several approaches to inducing latent action space from natural conversational data.", "We further propose (1) a novel training objective that outperforms the typical evidence lower bound used in dialog generation and (2) an attention mechanism for integrating discrete latent variables in the decoder to better model long responses.", "We test this on two datasets, DealOrNoDeal (Lewis et al., 2017) and MultiWoz (Budzianowski et al., 2018), to answer two key questions: (1) what are the advantages of LaRL over Word-level RL and (2) what effective methods can induce this latent action space.", "Results show that LaRL is significantly more effective than word-level RL for learning dialog policies and it does not lead to incomprehensible language generation.", "Our models achieve 18.2% absolute improvement over the previous state-of-the-art on MultiWoz and discover novel and diverse negotiation strategies on DealOrNoDeal.", "Besides strong empirical improvement, our model analysis reveals novel insights, e.g. it is crucial to reduce the exposure bias in the latent action space and discrete latent actions are more suitable than continuous ones to serve as action spaces for RL dialog agents.", "Prior RL research in modular dialog management has focused on policy optimization over hand-crafted action spaces in task-oriented domains (Walker, 2000; Young et al., 2007).", "A dialog manager is formulated as a Partially Observable Markov Decision Process (POMDP) (Young et al., 2013), where the dialog state is estimated via dialog state tracking models from the raw dialog context (Lee, 2013; Henderson et al., 2014; Ren et al., 2018).", "RL techniques are then used to find the optimal dialog policy (Gasic and Young, 2014; Su et al., 2017; Williams et al., 2017).", "Recent deep-learning modular dialog models have also explored joint optimization over dialog policy and state tracking to achieve stronger performance (Wen et al., 2016; Zhao and Eskenazi, 2016; Liu and Lane, 2017).", "A related line of work is reinforcement learning for E2E dialog systems.", "Due to the flexibility of encoder-decoder dialog models, prior work has applied reinforcement learning to more complex domains and achieved higher dialog-level rewards, such as open-domain chatting (Li et al., 2016; Ser-ban et al., 2017a), negotiation (Lewis et al., 2017), visual dialogs (Das et al., 2017), grounded dialog (Mordatch and Abbeel, 2017) etc.", "As discussed in Section 1, these methods consider the output vocabulary at every decoding step to be the action space; they suffer from limitations such as deviation from natural language and sub-optimal convergence.", "Finally, research in latent variable dialog models is closely related to our work, which strives to learn meaningful latent variables for E2E dialog systems.", "Prior work has shown that learning with latent variables leads to benefits like diverse response decoding (Serban et al., 2017b; Zhao et al., 2017; Cao and 
Clark, 2017), interpretable decision-making (Wen et al., 2017; Zhao et al., 2018) and zero-shot domain transfer (Zhao and Eskenazi, 2018).", "Also, driven by similar motivations of this work, prior studies have explored to utilize a coarse discrete node, either handcrafted or learned, to decouple the word generation process from dialog policy in E2E systems for better dialog policy (He et al., 2018; Yarats and Lewis, 2017).", "Our work differs from prior work for two reasons: (1) latent action in previous work is only auxiliary, small-scale and mostly learned in a supervised or semi-supervised setting.", "This paper focuses on unsupervised learning of latent variables and learns variables that are expressive enough to capture the entire action space by itself.", "(2) to our best knowledge, our work is the first comprehensive study of the use of latent variables for RL policy optimization in dialog systems.", "E2E response generation can be treated as a conditional language generation task, which uses neural encoder-decoders (Cho et al., 2014) to model the conditional distribution p ( x | c ) where c is the observed dialog context and x is the system's response to the context.", "The format of the dialog context is domain dependent.", "It can vary from tex-Figure 1: High-level comparison between word-level and latent-action reinforcement learning in a sample multiturn dialog.", "The decoder network generates the response given the latent code z .", "Dashed line denotes places where policy gradients from task rewards are applied to the model.", "tual raw dialog history (Vinyals and Le, 2015) to visual and textual context (Das et al., 2017).", "Training with RL usually has 2 steps: supervised pretraining and policy gradient reinforcement learning (Williams and Zweig, 2016; Dhingra et al., 2017; Li et al., 2016).", "Specifically, the supervised learning step maximizes the log likelihood on the training dialogs, where is the model parameter: LSL ( ) = E x , c [log p ( x | c )] (1) Then the following RL step uses policy gradients, e.g. the REINFORCE algorithm (Williams, 1992) to update the model parameters with respect to task-dependent goals.", "We assume that we have an environment that the dialog agent can interact with and that there is a turn-level reward r t at every turn t of the dialog.", "We can then write the expected discounted return under a dialog model as J ( ) = E [ (cid:80) T 0 t r t ] , where [0 , 1] is the discounting factor and T is the length of the dialog.", "Often a baseline function b is used to reduce the variance of the policy gradient (Greensmith et al., 2004), leading to R t = (cid:80) T t k =0 k ( r t + k b ) .", "Word-level Reinforcement Learning : as shown in Figure 1, the baseline approach treats every output word as an action step and its policy gradient is: J ( ) = E [ T (cid:88) t =0 U t (cid:88) j =0 R tj log p ( w tj | w <tj , c t )] (2) where U t is the number of tokens in the response at turn t and j is the word index in the response.", "It is evident that Eq 2 has a very large action space, i.e. | V | and a long learning horizon, i.e. 
T U .", "Prior work has found that the direct application of Eq 2 leads to divergence of the decoder.", "The common solution is to alternate with supervised learning with Eq 2 at a certain ratio (Lewis et al., 2017).", "We denote this ratio as RL:SL=A:B, which means for every A policy gradient updates, we run B supervised learning updates.", "We use RL:SL=off for the case where only policy gradients are used and no supervised learning is involved.", "We now describe the proposed LaRL framework.", "As shown in Figure 1, a latent variable z is introduced in the response generation process.", "The conditional distribution is factorized into p ( x | c ) = p ( x | z ) p ( z | c ) and the generative story is: (1) given a dialog context c we first sample a latent action z from p e ( z | c ) and (2) generate the response by sampling x based on z via p d ( x | z ) , where p e is the dialog encoder network and p d is the response decoder network.", "Given the above setup, LaRL treats the latent variable z as its action space instead of outputting words in response x .", "We can now apply REINFORCE in the latent action space: J ( ) = E [ T (cid:88) t =0 R t log p ( z | c t )] (3) Compared to Eq 2, LaRL differs by: Shortens the horizon from T U to T .", "Latent action space is designed to be low-dimensional, much smaller than V .", "The policy gradient only updates the encoder e and the decoder d stays intact.", "These properties reduce the difficulties for dialog policy optimization and decouple high-level decision-making from natural language generation.", "The p e are responsible for choosing the best latent action given a context c while p d is only responsible for transforming z into the surface-form words.", "Our formulation also provides a flexible framework for experimenting with various types of model learning methods.", "In this paper, we focus on two key aspects: the type of latent variable z and optimization methods for learning z in the supervised pre-training step.", "Two types of latent variables have been used in previous research: continuous isotropic Gaussian distribution (Serban et al., 2017b) and multivariate categorical distribution (Zhao et al., 2018).", "These two types are both compatible with our LaRL framework and can be defined as follows: Gaussian Latent Actions follow M dimensional multivariate Gaussian distribution with a diagonal covariance matrix, i.e. z N ( , 2 I ) .", "Let the encoder p e consist of two parts: a context encoder F , a neural network that encodes the dialog context c into a vector representation h , and a feed forward network that projects h into and .", "The process is defined as follows: h = F ( c ) (4) (cid:20) log( 2 ) (cid:21) = ( h ) (5) p ( x | z ) = p d ( z ) z N ( , 2 I ) (6) where the sampled z is used as the initial state of the decoder for response generation.", "Also we use p ( z | c ) = N ( z ; , 2 I ) to compute the policy gradient update in Eq", "3. 
Categorical Latent Actions are M indepen-dent K-way categorical random variables.", "Each z m has its own token embeddings to map latent symbols into vector space E m RK D where m [1 , M ] and D is the embedding size.", "Thus M latent actions can represent exponentially, KM , unique combinations, making it expressive enough to model dialog acts in complex domains.", "Similar to Gaussian Latent Actions, we have h = F ( c ) (7) p ( Z m | c ) = softmax ( m ( h )) (8) p ( x | z ) = p d ( E 1: M ( z 1: M )) z m p ( Z m | c ) (9) For the computing policy gradient in Eq 3, we have p ( z | c ) = (cid:81) Mm =1 p ( Z m = z m | c ) Unlike Gaussian latent actions, a matrix RM D comes after the embedding layers E 1: M ( z 1: M ) , whereas the decoder's initial state is a vector of size RD .", "Previous work integrated this matrix with the decoder by summing over the latent embeddings, i.e. x = p d ( (cid:80) M 1 E m ( z m )) , denoted as Summation Fusion for later discussion (Zhao et al., 2018).", "A limitation of this method is that it could lose fine-grained order information in each latent dimension and have issues with long responses that involve multiple dialog acts.", "Therefore, we propose a novel method, Attention Fusion , to combine categorical latent actions with the decoder.", "We apply the attention mechanism (Lu-ong et al., 2015) over latent actions as the following.", "Let i be the step index during decoding.", "Then we have: mi = softmax ( h Ti W a E m ( z m )) (10) c i = M (cid:88) m =1 mi E m ( z m ) (11) (cid:101) h i = tanh ( W s (cid:20) h i c i (cid:21) ) (12) p ( w i | h i , c i ) = softmax ( W o (cid:101) h i ) (13) The decoder's next state is updated by h i +1 = RNN ( h i , w i +1 ) , (cid:101) h i ) and h 0 is computed via summation-fusion.", "Thus attention fusion lets the decoder focus on different latent dimensions at each generation step.", "Full ELBO : Now given a training dataset { x , c } , our base optimization method is via stochastic variational inference by maximizing the evidence lowerbound (ELBO), a lowerbound on the data log likelihood:", "L full ( ) = p q ( z | x , c ) ( x | z ) DKL [ q ( z | x , c ) (cid:107) p ( z | c )] (14)", "where q ( z | x , c ) is a neural network that is trained to approximate the posterior distribution q ( z | x , c ) and p ( z | c ) and p ( x | z ) are achieved by F , and p d .", "For Gaussian latent actions, we use the reparametrization trick (Kingma and Welling, 2013) to backpropagate through Gaussian latent actions and the Gumbel-Softmax (Jang et al., 2016) to backpropagate through categorical latent actions.", "Lite ELBO : a major limitation is that Full ELBO can suffer from exposure bias at latent space, i.e. the decoder only sees z sampled from q ( z | x , c ) and never experiences z sampled from p ( z | c ) , which is always used at testing time.", "Therefore, in this paper, we propose a simplified ELBO for encoder-decoder models with stochastic latent variables: L lite ( ) = p p ( z | c ) ( x | z ) DKL [ p ( z | c )) (cid:107) p ( z )] (15) Essentially this simplified objective sets the posterior network the same as our encoder, i.e. 
q ( z | x , c ) = p e ( z | c ) , which makes the KL term in Eq 14 zero and removes the issue of exposure bias.", "But this leaves the latent spaces unregularized and our experiments show that if we only maximize p p ( z | c ) ( x | z ) there is overfitting.", "For this, we add the additional regularization term DKL [ p ( z | c )) (cid:107) p ( z )] that encourages the posterior be similar to certain prior distributions and is a hyper-parameter between 0 and", "1. We set the p ( z ) for categorical latent actions to be uniform, i.e. p ( z ) = 1 /K , and set the prior for Gaussian latent actions to be N ( 0 , I ) , which we will show that are effective.", "DealOrNoDeal is a negotiation dataset that contains 5805 dialogs based on 2236 unique scenarios (Lewis et al., 2017).", "We hold out 252 scenarios for testing environment and randomly sample 400 scenarios from the training set for validation.", "The results are evaluated from 4 perspectives: Perplexity (PPL), Reward, Agree and Diversity.", "PPL helps us to identify which model produces the most human-like responses, while Reward and Agree evaluate the model's negotiation strength.", "Diversity indicates whether the model discovers a novel discourse-level strategy or just repeats dull responses to compromise with the opponent.", "We closely follow the original paper and use the same reward function and baseline calculation.", "At last, to have a fair comparison, all the compared models shared the identical judge model and user simulator, which are a standard hierarchical encoder-decoder model trained with Maximum Likelihood Estimation (MLE).", "Multi-Woz is a slot-filling dataset that contains 10438 dialogs on 6 different domains.", "8438 dialogs are for training and 1000 each are for validation and testing.", "Since no prior user simulator exists for this dataset, for a fair comparison with the previous state-of-the-art we focus on the Dialog-Context-to-Text Generation task proposed in (Budzianowski et al., 2018).", "This task assumes that the model has access to the ground-truth dialog belief state and is asked to generate the next response at every system turn in a dialog.", "The results are evaluated from 3 perspectives: BLEU, Inform Rate and Success Rate.", "The BLEU score checks the response-level lexical similarity, while Inform and Success Rate measure whether the model gives recommendations and provides all the requested information at dialog-level.", "Current state-of-the-art results struggle in this task and MLE models only achieve 60% success (Budzianowski et al., 2018).", "To transform this task into an RL task, we propose a novel extension to the original task as follows:", "1. For each RL episode, randomly sample a dialog from the training set", "2. Run the model on every system turn, and do not alter the original dialog context at every turn given the generated responses.", "3. Compute Success Rate based on the generated responses in this dialog.", "4. 
Compute policy gradient using Eq 3 and update the parameters.", "This setup creates a variant RL problem that is similar to the Contextual Bandits (Langford and Zhang, 2008), where the goal is to adjust its parameters to generate responses that yield better Success Rate.", "Our results show that this problem is challenging and that word-level RL falls short.", "It is challenging to quantify the performance of RL-based neural generation systems because it is possible for a model to achieve high task reward and yet not generate human language (Das et al., 2017).", "Therefore, we propose a novel measure, the Language Constrained Reward (LCR) curve as an additional robust measure.", "The basic idea is to use an ROC-style curve to visualize the tradeoff between achieving higher reward and being faithful to human language.", "Specifically, at each checkpoint i over the course of RL training, we record two measures: (1) the PPL of a given model on the test data p i = PPL ( i ) and (2) this model's average cumulative task reward in the test environment R ti .", "After RL training is complete, we create a 2D plot where the x-axis is the maximum PPL allowed, and the y-axis is the best achievable reward within the PPL budget in the testing environments: y = max i R ti subject to p i < x (16) As a result, a perfect model should lie in the upper left corner whereas a model that sacrifices language quality for higher reward will lie in the lower right corner.", "Our results will show that the LCR curve is an informative and robust measure for model comparison.", "We have created 6 different variations of latent action dialog models under our LaRL framework.", "To demonstrate the advantages of LaRL, during Model Var Type Loss Integration Gauss Gaussian L full / Cat Categorical L full sum AttnCat Categorical L full attn LiteGauss Gaussian L lite / LiteCat Categorical L lite sum LiteAttnCat Categorical L lite attn Table 1: All proposed variations of LaRL models.", "the RL training step, we set RL:SL=off for all latent action models, while the baseline word-level RL models are free to tune RL:SL for best performance.", "For latent variable models, their perplexity is estimated via Monte Carlo p ( x | c ) E p ( z | c ) [ p ( x | z ) p ( z | c )] .", "For the sake of clarity, this section only compares the best performing latent action models to the best performing word-level models and focuses on the differences between them.", "A detailed comparison of the 6 latent space configurations is addressed in Section 7.", "The baseline system is a hierarchical recurrent encoder-decoder (HRED) model (Serban et al., 2016) that is tuned to reproduce results from (Lewis et al., 2017).", "Word-level RL is then used to fine-tune the pre-trained model with RL:SL=4:1.", "On the other hand, the best performing latent action model is LiteCat.", "Best models are chosen based on performance on the validation environment.", "The results are summarized in Table 2 and Figure 2 shows the LCR curves for the baseline with the two best models plus LiteAttnCat and baseline without RL:SL.", "From Table 2, it appears that the word-level RL baseline performs better than LiteCat in terms of rewards.", "However, Figure 2 shows that the two LaRL models achieve strong task rewards with a much smaller performance drop in language quality (PPL), whereas the word-level PPL Reward Agree% Diversity Baseline 5.23 3.75 59 109 LiteCat 5.35 2.65 41 58 Baseline +RL 8.23 7.61 86 5 LiteCat +RL 6.14 7.27 87 202 Table 2: Results on DealOrNoDeal.", "model 
can only increase its task rewards by deviating significantly from natural language.", "Closer analysis shows the word-level baseline severely overfits to the user simulator.", "The caveat is that the word-level models have in fact discovered a loophole in the simulator by insisting on 'hat' and 'ball' several times and the user model eventually yields to agree to the deal.", "This is re-flected in the diversity measure, which is the number of unique responses that a model uses in all 200 testing scenarios.", "As shown in Figure 3, after RL training, the diversity of the baseline model drops to only 5.", "It is surprising that the agent can achieve high reward with a well-trained HRED user simulator using only 5 unique utterances.", "On the contrary, LiteCat increases its response diversity after RL training from 58 to 202, suggesting that LiteCat discovers novel discourse-level strategies in order to win the negotiation instead of exploiting local loopholes in the same user simulator.", "Our qualitative analysis confirms this when we observe that our LiteCat model is able to use multiple strategies in negotiation, e.g. elicit preference question, request different offers, insist on key objects etc.", "See supplementary material for example conversations.", "For MultiWoz, we reproduce results from (Budzianowski et al., 2018) as the baseline.", "After RL training, the best LaRL model is LiteAttnCat and the best word-level model is word RL:SL=off.", "Table 3 shows that LiteAttnCat is on PPL BLEU Inform Success Human / / 90% 82.3% Baseline 3.98 18.9 71.33% 60.96% LiteAttnCat 4.05 19.1 67.98% 57.36% Baseline +RL 17.11 1.4 80.5% 79.07% LiteAttnCat +RL 5.22 12.8 82.78% 79.2% Table 3: Main results on MultiWoz test set.", "par with the baseline in the supervised learning step, showing that multivariate categorical latent variables alone are powerful enough to match with continuous hidden representations for modeling dialog actions.", "For performance after RL training, LiteAttnCat achieves near-human performance in terms of success rate and inform rate, obtaining 18.24% absolute improvement over the MLE-based state-of-the-art (Budzianowski et al., 2018).", "More importantly, perplexity only slightly increases from 4.05 to 5.22.", "On the other hand, the word-level RL's success rate also improves to 79%, but the generated responses completely deviate from natural language, increasing perplexity from 3.98 to 17.11 and dropping BLEU from 18.9 to 1.4.", "Figure 4 shows the LCR curves for MultiWoz, with a trend similar to the previous section: the word-level models can only achieve task reward improvement by sacrificing their response decoder PPL.", "Figure 4 also shows the LCR curve for the baseline trained with RL:SL=100:1, hoping that supervised learning can force the model to conform to natural language.", "While PPL and BLEU are indeed improved, it also limits final reward performance.", "The latent-level models, on the contrary, do not suffer from this tradeoff.", "We also observe that LiteAttnCat consistently outperforms LiteCat on MultiWoz, confirming the effectiveness of Attention Fusion for handling long dialog responses with multiple entities and dialog acts.", "Lastly, Table 4 qualitatively exhibits the generation differences between the two approaches.", "The RL:SL=off model learns to continuously output entities to fool the evaluation script for high success rate, whereas LiteCatAttn learns to give more information while maintaining the language quality.", "We compare the 6 variants of 
latent action models on DealOrNoDeal and MultiWoz.", "Table 5 Deal PPL Reward Agree% Diversity Baseline 3.23 3.75 59 109 Gauss 110K 2.71 43 176 LiteGauss 5.35 4.48 65 91 Cat 80.41 3.9 62 115 AttnCat 118.3 3.23 51 145 LiteCat 5.35 2.67 41 58 LiteAttnCat 5.25 3.69 52 75 MultiWoz PPL BLEU Inform% Succ% Baseline 3.98 18.9 71.33 60.96 Gauss 712.3 7.54 60.5 23.0 LiteGauss 4.06 19.3 56.46 48.06 Cat 7.07 13.7 54.15 42.04 AttnCat 12.01 12.6 63.9 45.8 LiteCat 4.10 19.1 61.56 49.15 LiteAttnCat 4.05 19.1 67.97 57.36 Table 5: Comparison of 6 model variants with only supervised learning training.", "shows performance of the models that are pre-trained only with supervised learning.", "Figure 5 shows LCR curves for the 3 models pre-trained with L lite and fine-tuned with policy gradient reinforcement learning.", "The following are the main Figure 5: LCR curves on DealOrNoDeal and MultiWoz.", "L lite outperforms L full as a pre-train objective.", "Table 5 shows that models with L full fall behind their Lite counterparts on PPL and BLEU.", "We attribute this to the exposure bias in the latent space, i.e. the decoder is not trained to consider the discrepancy between the posterior network and actual dialog policy network.", "Meanwhile, the full models tend to enjoy higher diversity at pre-training, which agrees with the diversity-promoting effect observed in prior research (Zhao et al., 2017).", "However, our previous discussion on Figure 3 shows that Lite models are able to increase their response diversity in order to win more in negotiation through RL training.", "This is fundamentally different from diversity in pretraining, since diversity in LaRL is optimized to improve task reward, rather than to better model the original data distribution.", "Table 6 shows the 0.0 0.01 0.0 0.01 LiteCat 4.23 7.27 LiteGauss 4.83 6.67 Table 6: Best rewards in test environments on DealOrNoDeal with various .", "importance of latent space regularization.", "When is 0 , both LiteCat and LiteGauss reach suboptimal policies with final reward that are much smaller than the regularized versions ( = 0 . 
01 ).", "The reason behind this is that the unregularized pre-trained policy has very low entropy, which prohibits sufficient exploration in the RL stage.", "Categorical latent actions outperform Gaussian latent actions.", "Models with discrete actions consistently outperform models with Gaussian ones.", "This is surprising since continuously distributed representations are a key reason for the success of deep learning in natural language processing.", "Our finding suggests that (1) multivariate categorical distributions are powerful enough to model complex natural dialog responses semantics, and can achieve on par results with Gaussian or non-stochastic continuous representations.", "(2) categorical variables are a better choice to serve as action spaces for reinforcement learning.", "Figure 5 shows that Lite(Attn)Cat easily achieves strong rewards while LiteGauss struggles to improve its reward.", "Also, applying REINFORCE on Gaussian latent actions is unstable and often leads to model divergence.", "We suspect the reason for this is the unbounded nature of continuous latent space: RL exploration in the continuous space may lead to areas in the manifold that are not covered in supervised training, which causes undefined decoder behavior given z in these unknown areas.", "Future Work", "In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , volume 1, pages 484495.", "Milica Gasic and Steve Young.", "2014.", "Gaussian processes for pomdp-based dialogue manager optimization.", "IEEE/ACM Transactions on Audio, Speech, and Language Processing , 22(1):2840.", "Evan Greensmith, Peter L Bartlett, and Jonathan Baxter.", "2004.", "Variance reduction techniques for gradient estimates in reinforcement learning.", "Journal of Machine Learning Research , 5(Nov):14711530.", "Eric Jang, Shixiang Gu, and Ben Poole.", "2016.", "Categorical reparameterization with gumbel-softmax.", "arXiv preprint arXiv:1611.01144 .", "Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore.", "1996.", "Reinforcement learning: A survey.", "Journal of artificial intelligence research , 4:237285.", "Diederik P Kingma and Max Welling.", "2013.", "Auto-encoding variational bayes.", "arXiv preprint arXiv:1312.6114 .", "John Langford and Tong Zhang.", "2008.", "The epoch-greedy algorithm for multi-armed bandits with side information.", "In Advances in neural information processing systems , pages 817824.", "Sungjin Lee.", "2013.", "Structured discriminative model for dialog state tracking.", "In Proceedings of the SIGDIAL 2013 Conference , pages 442451.", "In conclusion, this paper proposes a latent variable action space for RL in E2E dialog agents.", "We present a general framework with a regularized ELBO objective and attention fusion for discrete variables.", "The methods are assessed on two dialog tasks and analyzed using the proposed LCR curve.", "Results show our models achieve superior performance and create a new state-of-the-art success rate on MultiWoz.", "Extensive analyses enable us to gain insight on how to properly train latent variables that can serve as the action spaces for dialog agents.", "This work is situated in the approach concerning practical latent variables in dialog agents, being able to create action abstraction in an unsupervised manner.", "We believe that our findings are a basic first step in this promising research direction." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "method", "method", "objective" ]
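The latent-action policy gradient in Eq 3 is straightforward to implement once the encoder exposes p(z|c). The sketch below assumes categorical latent actions (M independent K-way variables), a single PyTorch linear layer standing in for the context encoder F and the projection onto logits, and toy dimensions; it is an illustration of the update, not the authors' released code.

```python
import torch

# Illustrative sketch of the latent-action REINFORCE update (Eq 3):
# the policy is p(z|c) over M independent K-way categorical variables,
# and only the encoder receives policy gradients; the decoder stays frozen.

M, K, H = 10, 20, 64                      # assumed sizes: latent vars, categories, context dim
encoder = torch.nn.Linear(H, M * K)       # stand-in for F(c) followed by the logit projection
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.01)

def latent_policy_gradient_step(context, returns):
    """context: (T, H) encoded dialog contexts for T turns;
    returns: (T,) discounted, baseline-subtracted returns R_t."""
    logits = encoder(context).view(-1, M, K)           # (T, M, K)
    dist = torch.distributions.Categorical(logits=logits)
    z = dist.sample()                                  # (T, M) sampled latent actions
    # log p(z|c_t) factorises over the M variables (sum of per-variable log-probs)
    log_p_z = dist.log_prob(z).sum(dim=-1)             # (T,)
    # REINFORCE: maximise E[R_t * log p(z|c_t)], i.e. minimise the negative
    loss = -(returns * log_p_z).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return z   # z would be fed to the frozen decoder p_d(x|z) to produce the response

# toy usage with random tensors
ctx = torch.randn(3, H)
R = torch.tensor([1.0, -0.5, 0.2])
latent_policy_gradient_step(ctx, R)
```

Relative to the word-level update in Eq 2, the horizon shrinks from T·U decoding steps to T turns, and the decoder p_d(x|z) receives no policy gradients, which is exactly the decoupling of decision-making from surface realization that the framework argues for.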
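Attention Fusion (Eqs 10-13) lets the decoder attend over the M latent embeddings at every generation step instead of collapsing them into one summed vector. A minimal sketch follows; applying W_a to the decoder state is one way to realise the bilinear score, and the layer sizes and vocabulary size are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

M, D, V = 10, 64, 1000                        # assumed latent count, hidden size, vocab size
W_a = torch.nn.Linear(D, D, bias=False)
W_s = torch.nn.Linear(2 * D, D, bias=False)
W_o = torch.nn.Linear(D, V, bias=False)

def attention_fusion_step(h_i, latent_embs):
    """h_i: (B, D) decoder state at step i; latent_embs: (B, M, D) embeddings E_m(z_m)."""
    scores = torch.einsum("bd,bmd->bm", W_a(h_i), latent_embs)    # Eq 10: h_i^T W_a E_m(z_m)
    alpha = F.softmax(scores, dim=-1)
    c_i = torch.einsum("bm,bmd->bd", alpha, latent_embs)          # Eq 11: attention context
    h_tilde = torch.tanh(W_s(torch.cat([h_i, c_i], dim=-1)))      # Eq 12
    word_logprobs = F.log_softmax(W_o(h_tilde), dim=-1)           # Eq 13
    # h_tilde is also fed into the next RNN state update; h_0 comes from summation fusion.
    return word_logprobs, h_tilde
```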
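The Lite ELBO in Eq 15 ties the posterior to the policy network p_e(z|c) and regularises it towards a fixed prior with weight β; for categorical latent actions the prior is uniform and the KL term has a closed form. The sketch below assumes β = 0.01 (the regularised setting reported as effective) and uses PyTorch's Gumbel-Softmax to backpropagate through sampling, as the text describes; the shapes and helper names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

M, K, H, V = 10, 20, 64, 1000     # assumed latents, categories, hidden size, vocab size
beta = 0.01                        # regularisation weight; 0 is reported to underperform

def lite_elbo_loss(logits, recon_logits, target_tokens):
    """logits: (B, M, K) unnormalised p(z|c); recon_logits: (B, L, V) decoder outputs
    conditioned on z ~ p(z|c); target_tokens: (B, L) gold response token ids."""
    # reconstruction term: token-level cross entropy for log p(x|z)
    recon = F.cross_entropy(recon_logits.reshape(-1, V), target_tokens.reshape(-1))
    # KL(p(z|c) || uniform prior), summed over the M categorical variables
    log_q = F.log_softmax(logits, dim=-1)                       # (B, M, K)
    q = log_q.exp()
    kl = (q * (log_q - torch.log(torch.tensor(1.0 / K)))).sum(-1).sum(-1).mean()
    return recon + beta * kl

def sample_z(logits, tau=1.0):
    # Gumbel-Softmax relaxation so gradients reach the encoder during pre-training
    return F.gumbel_softmax(logits, tau=tau, hard=True)         # (B, M, K) one-hot samples
```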
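The Language Constrained Reward curve in Eq 16 only needs, for each RL training checkpoint, the test-set perplexity and the average test reward. A small framework-free helper is sketched below; the checkpoint values in the usage example are hypothetical.

```python
def lcr_curve(checkpoints, ppl_budgets):
    """checkpoints: list of (ppl_i, reward_i) pairs recorded during RL training.
    For each perplexity budget x, return the best reward achievable by any
    checkpoint whose PPL stays under x (Eq 16); None if no checkpoint qualifies."""
    curve = []
    for x in ppl_budgets:
        feasible = [r for p, r in checkpoints if p < x]
        curve.append((x, max(feasible) if feasible else None))
    return curve

# hypothetical checkpoints: PPL rises as reward improves over the course of RL
ckpts = [(4.0, 0.45), (4.3, 0.58), (5.2, 0.71), (9.8, 0.83), (17.0, 0.85)]
print(lcr_curve(ckpts, ppl_budgets=[4.5, 6.0, 10.0, 20.0]))
# -> [(4.5, 0.58), (6.0, 0.71), (10.0, 0.83), (20.0, 0.85)]
```

A model that trades language quality for reward contributes points only at large PPL budgets, which is how the curve separates it from a model that improves reward while staying close to natural language.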
[ "Fernando Alva-Manchego and Louis Martin and Antoine Bordes Carolina Scarton 1 and Benot Sagot 2 and Lucia Specia 1 , 4 1 University of Sheffield, 2 Inria, 3 Facebook AI Research, 4 Imperial College London [email protected], [email protected], [email protected] [email protected], [email protected] [email protected]", "Abstract In order to simplify a sentence, human editors perform multiple rewriting transformations: they split it into several shorter sentences, paraphrase words (i.e. replacing complex words or phrases by simpler synonyms), reorder components, and/or delete information deemed unnecessary.", "Despite these varied range of possible text alterations, current models for automatic sentence simplification are evaluated using datasets that are focused on a single transformation, such as lexical paraphrasing or splitting.", "This makes it impossible to understand the ability of simplification models in more realistic settings.", "To alleviate this limitation, this paper introduces ASSET, a new dataset for assessing sentence simplification in English.", "ASSET is a crowdsourced multi-reference corpus where each simplification was produced by executing several rewriting transformations.", "Through quantitative and qualitative experiments, we show that simplifications in ASSET are better at capturing characteristics of simplicity when compared to other standard evaluation datasets for the task.", "Furthermore, we motivate the need for developing better methods for automatic evaluation using ASSET, since we show that current popular metrics may not be suitable when multiple simplification transformations are performed.", "Sentence Simplification (SS) consists in modifying the content and structure of a sentence to make it easier to understand, while retaining its main idea and most of its original meaning (Alva-Manchego et al., 2020).", "Simplified texts can benefit non-native speakers (Paetzold, 2016), people suffering from aphasia (Carroll et al., 1998), dyslexia (Rello et al., 2013) or autism (Evans et al., 2014).", "They also help language processing tasks, such as parsing (Chan-drasekar et al., 1996), summarisation (Silveira and Equal Contribution Branco, 2012), and machine translation (Hasler et al., 2017).", "In order simplify a sentence, several rewriting transformations can be performed: replacing complex words/phrases with simpler synonyms (i.e. lexical paraphrasing), changing the syntactic structure of the sentence (e.g. 
splitting), or removing super-fluous information that make the sentence more complicated (Petersen, 2007; Alusio et al., 2008; Bott and Saggion, 2011).", "However, models for automatic SS are evaluated on datasets whose simplifications are not representative of this variety of transformations.", "For instance, TurkCorpus (Xu et al., 2016), a standard dataset for assessment in SS, contains simplifications produced mostly by lexical paraphrasing, while reference simplifications in HSplit (Sulem et al., 2018a) focus on splitting sentences.", "The Newsela corpus (Xu et al., 2015) contains simplifications produced by professionals applying multiple rewriting transformations, but sentence alignments are automatically computed and thus imperfect, and its data can only be accessed after signing a restrictive public-sharing licence and cannot be redistributed, hampering reproducibility.", "These limitations in evaluation data prevent studying models' capabilities to perform a broad range of simplification transformations.", "Even though most SS models are trained on simplification instances displaying several text transformations (e.g. WikiLarge (Zhang and Lapata, 2017)), we currently do not measure their performance in more abstractive scenarios, i.e. cases with substantial modifications to the original sentences.", "In this paper we introduce ASSET ( A bstractive S entence S implification E valuation and T uning), a new dataset for tuning and evaluation of automatic SS models.", "ASSET consists of 23,590 human simplifications associated with the 2,359 original sentences from TurkCorpus (10 simplifications per original sentence).", "Simplifications in ASSET were collected via crowdsourcing ( 3), and encompass a variety of rewriting transformations ( 4), which make them simpler than those in TurkCorpus and HSplit ( 5), thus providing an additional suitable benchmark for comparing and evaluating automatic SS models.", "In addition, we study the applicability of standard metrics for evaluating SS using simplifications in ASSET as references ( 6).", "We analyse whether BLEU (Papineni et al., 2002) or SARI (Xu et al., 2016) scores correlate with human judgements of fluency, adequacy and simplicity, and find that neither of the metrics shows a strong correlation with simplicity ratings.", "This motivates the need for developing better metrics for assessing SS when multiple rewriting transformations are performed.", "We make the following contributions: A high quality large dataset for tuning and evaluation of SS models containing simplifications produced by applying multiple rewriting transformations.", "1 An analysis of the characteristics of the dataset that turn it into a new suitable benchmark for evaluation.", "A study questioning the suitability of popular metrics for evaluating automatic simplifications in a multiple-transformation scenario.", "A few corpus studies have been carried out to analyse how humans simplify sentences, and to attempt to determine the rewriting transformations that are performed.", "Petersen and Ostendorf (2007) analysed a corpus of 104 original and professionally simplified news articles in English.", "Sentences were manually aligned and each simplification instance was categorised as dropped (1-to-0 alignment), split (1-to-N), total (1-to-1) or merged (2-to-1).", "Some splits were further sub-categorised as edited (i.e. the sentence was split and some part was dropped) or different (i.e. 
same information but very different wording).", "This provides evidence that sentence splitting and deletion of information can be performed simultaneously.", "Alusio et al. (2008) studied six corpora of simple texts (different genres) and a corpus of complex news texts in Brazilian Portuguese, to produce a manual for Portuguese text simplification (Specia et al., 2008).", "It contains several rules to perform the task focused on syntactic alterations: to split adverbial/coordinated/subordinated sentences, to reorder clauses to a subject-verb-object structure, to transform passive to active voice, among others.", "Bott and Saggion (2011) worked with a dataset of 200 news articles in Spanish with their corresponding manual simplifications.", "After automatically aligning the sentences, the authors determined the simplification transformations performed: change (e.g. difficult words, pronouns, voice of verb), delete (words, phrases or clauses), insert (word or phrases), split (relative clauses, coordination, etc.), proximisation (add locative phrases, change from third to second person), reorder, select, and join (sentences).", "From all these studies, it can be argued that the scope of rewriting transformations involved in the simplification process goes beyond only replacing words with simpler synonyms.", "In fact, human perception of complexity is most affected by syntactic features related to sentence structure (Brunato et al., 2018).", "Therefore, since human editors make several changes to both the lexical content and syntactic structure of sentences when simplifying them, we should expect that models for automatic sentence simplification can also make such changes.", "Most datasets for SS (Zhu et al., 2010; Coster and Kauchak, 2011; Hwang et al., 2015) consist of automatic sentence alignments between related articles in English Wikipedia (EW) and Simple English Wikipedia (SEW).", "In SEW, contributors are asked to write texts using simpler language, such as by shortening sentences or by using words from Basic English (Ogden, 1930).", "However, Yasseri et al. (2012) found that the syntactic complexity of sentences in SEW is almost the same as in EW.", "In addition, Xu et al. (2015) determined that automatically-aligned simple sentences are sometimes just as complex as their original counterparts, with only a few words replaced or dropped and the rest of the sentences left unchanged.", "More diverse simplifications are available in the Newsela corpus (Xu et al., 2015), a dataset of 1,130 news articles that were each manually simplified to up to 5 levels of simplicity.", "The parallel articles can be automatically aligned at the sentence level to train and test simplification models (Alva-Manchego et al., 2017; Stajner et al., 2018).", "However, the Newsela corpus can only be accessed after signing a restrictive license that prevents publicly sharing train/test splits of the dataset, which impedes reproducibility.", "Evaluating models on automatically-aligned sentences is problematic.", "Even more so if only one (potentially noisy) reference simplification for each original sentence is available.", "With this concern in mind, Xu et al. 
(2016) collected the TurkCorpus, a dataset with 2,359 original sentences from EW, each with 8 manual reference simplifications.", "The dataset is divided into two subsets: 2,000 sentences for validation and 359 for testing of sentence simplification models.", "TurkCorpus is suitable for automatic evaluation that involves metrics requiring multiple references, such as BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016).", "However, Xu et al. (2016) focused on simplifications through lexical paraphrasing, instructing annotators to rewrite sentences by reducing the number of difficult words or idioms, but without deleting content or splitting the sentences.", "This prevents evaluating a model's ability to perform a more diverse set of rewriting transformations when simplifying sentences.", "HSplit (Sulem et al., 2018a), on the other hand, provides simplifications involving only splitting for sentences in the test set of TurkCorpus.", "We build on TurkCorpus and HSplit by collecting a dataset that provides several manually-produced simplifications involving multiple types of rewriting transformations.", "A few projects have been carried out to collect manual simplifications through crowdsourcing.", "Pellow and Eskenazi (2014a) built a corpus of everyday documents (e.g. driving test preparation ma-terials), and analysed the feasibly of crowdsourcing their sentence-level simplifications.", "Of all the quality control measures taken, the most successful was providing a training session to workers, since it allowed to block spammers and those without the skills to perform the task.", "Additionally, they proposed to use workers' self-reported confidence scores to flag submissions that could be discarded or reviewed.", "Later on, Pellow and Eskenazi (2014b) presented a preliminary study on producing simplifications through a collaborative process.", "Groups of four workers were assigned one sentence to simplify, and they had to discuss and agree on the process to perform it.", "Unfortunately, the data collected in these studies is no longer publicly available.", "Simplifications in TurkCorpus were also collected through crowdsourcing.", "Regarding the methodology followed, Xu et al. (2016) only report removing bad workers after manual check of their first several submissions.", "More recently, Scarton et al. 
(2018) used volunteers to collect simplifications for SimPA, a dataset with sentences from the Public Administration domain.", "One particular characteristic of the methodology followed is that lexical and syntactic simplifications were performed independently.", "We extended TurkCorpus (Xu et al., 2016) by using the same original sentences, but crowdsourced manual simplifications that encompass a richer set of rewriting transformations.", "Since TurkCorpus was adopted as the standard dataset for evaluating SS models, several system outputs on this data are already publicly available (Zhang and Lapata, 2017; Zhao et al., 2018; Martin et al., 2020).", "Therefore, we can now assess the capabilities of these and other systems in scenarios with varying simplification expectations: lexical paraphrasing with TurkCorpus, sentence splitting with HSplit, and multiple transformations with ASSET.", "Manual simplifications were collected using Amazon Mechanical Turk (AMT).", "AMT allows us to publish HITs (Human Intelligence Tasks), which workers can choose to work on, submit an answer, and collect a reward if the work is approved.", "This was also the platform used for TurkCorpus.", "Worker Requirements.", "Participants were workers who: (1) have a HIT approval rate > = 95% ; (2) have a number of HITs approved > 1000 ; (3) are residents of the United States of America, the United Kingdom or Canada; and (4) passed the corresponding Qualification Test designed for our task (more details below).", "The first two requirements are measured by the AMT platform and ensure that the workers have experience on different tasks and have had most of their work approved by previous requesters.", "The last two requirements are intended Original Their eyes are quite small, and their visual acuity is poor.", "to ensure that the workers have a proficient level of English, and are capable of performing the simplification task.", "Qualification Test.", "We provided a training session to workers in the form of a Qualification Test (QT).", "Following Pellow and Eskenazi (2014a), we showed them explanations and examples of multiple simplification transformations (see details below).", "Each HIT consisted of three sentences to simplify, and all submissions were manually checked to filter out spammers and workers who could not perform the task correctly.", "The sentences used in this stage were extracted from the QATS dataset (Stajner et al., 2016).", "We had 100 workers take the QT, out of which 42 passed the test (42%) and worked on the task.", "Annotation Round.", "Workers who passed the QT had access to this round.", "Similar to Pellow and Eskenazi (2014a), each HIT now consisted of four original sentences that needed to be simplified.", "In addition to the simplification of each sentence, workers were asked to submit confidence scores on their simplifications using a 5-point likert scale (1:Very Low, 5:Very High).", "We collected 10 simplifications (similar to Pellow and Eskenazi (2014a)) for each of the 2,359 original sentences in TurkCorpus.", "Simplification Instructions.", "For both the QT and the Annotation Round, workers received the same set of instructions about how to simplify a sentence.", "We provided examples of lexical paraphrasing (lexical simplification and reordering), sentence splitting, and compression (deleting unimportant information).", "We also included an example where all transformations were performed.", "However, we clarified that it was at their discretion to decide which types of rewriting to 
execute in any given original sentence.", "2 Table 1 presents a few examples of simplifications in ASSET, together with references from TurkCorpus and HSplit, randomly sampled for the same original sentences.", "It can be noticed that annotators in ASSET had more freedom to change the structure of the original sentences.", "ASSET contains 23,590 human simplifications associated with the 2,359 original sentences from TurkCorpus (2,000 from the validation set and 359 from the test set).", "Table 2 presents some general statistics from simplifications in ASSET.", "We show the same statistics for TurkCorpus and HSplit for comparison.", "3 In addition to having more references per original sentence, ASSET's simplifications offer more variability, for example containing many more instances of natural sentence splitting than TurkCorpus.", "In addition, reference simplifications are shorter on average in ASSET, given that we allowed annotators to delete information that they considered unnecessary.", "In the next section, we further compare these datasets with more detailed text features.", "We study the simplifications collected for ASSET through a series of text features to measure the", "Full instructions are available in the dataset's repository.", "3 HSplit is composed of two sets of simplifications: one where annotators were asked to split sentences as much as they could, and one where they were asked to split the original sentence only if it made the simplification easier to read and understand.", "However, we consider HSplit as a whole because differences between datasets far outweigh differences between these two sets.", "abstractiveness of the rewriting transformations performed by the annotators.", "From here on, the analysis and statistics reported refer to the test set only (i.e. 
359 original sentences), so that we can fairly compare ASSET, TurkCorpus and HSplit.", "In order to quantify the rewriting transformations, we computed several low-level features for all simplification instances using the tseval package (Martin et al., 2018):", "Number of sentence splits: Corresponds to the difference between the number of sentences in the simplification and the number of sentences in the original sentence.", "In tseval , the number of sentences is calculated using NLTK (Loper and Bird, 2002).", "Compression level: Number of characters in the simplification divided by the number of characters in the original sentence.", "Replace-only Levenshtein distance: Computed as the normalised character-level Levenshtein distance (Levenshtein, 1966) for replace operations only, between the original sentence and the simplification.", "Replace-only Levenshtein distance is computed as follows (with o the original sentence and s the simpli-fication): replace ops ( o, s ) min ( len ( o ) , len ( s )) We do not consider insertions and deletions in the Levenshtein distance computation so that this feature is independent from the compression level.", "It therefore serves as a proxy for measuring the lexical paraphrases of the simplification.", "Proportion of words deleted, added and reordered: Number of words deleted/reordered from the original sentence divided by the number of words in the original sentence; and the number of words that were added to the original sentence divided by the number of words in the simplification.", "Exact match: Boolean feature that equals to true when the original sentence and the simplification are exactly the same, to account for unchanged sentences.", "Word deletion only: Boolean feature that equals to true when the simplification is obtained only by deleting words from the original sentence.", "This feature captures extractive compression.", "Lexical complexity score ratio: We compute the score as the mean squared log-ranks of content words in a sentence (i.e. without stopwords).", "We use the 50k most frequent words of the FastText word embeddings vocabulary (Bojanowski et al., 2016).", "This vocabulary was originally sorted with frequencies of words in the Common Crawl.", "This score is a proxy to the lexical complexity of the sentence given that word ranks (in a frequency table) have been shown to be best indicators of word complexity (Paetzold and Specia, 2016).", "The ratio is then the value of this score on the simplification divided by that of the original sentence.", "Dependency tree depth ratio: We compute the ratio of the depth of the dependency parse tree of the simplification relative to that of the original sentence.", "When a simplification is composed by more than one sentence, we choose the maximum depth of all dependency trees.", "Parsing is performed using spaCy.", "4 This feature serves as a proxy to measure improvements in structural simplicity.", "Figure 1 shows the density of all features in ASSET, and compares them with those in TurkCorpus and", "4 github.com/explosion/spaCy", "HSplit.", "Table 3 highlights some of these statistics.", "In particular, we report the percentage of sentences that: have at least one sentence split, have a compression level of 75% or lower, have at least one reordered word, are exact copies of the original sentences, and operated word deletion only (e.g. 
by removing only an adverb).", "Sentence splits are practically non-existent in TurkCorpus (only 4.6% have one split or more), and are more present and distributed in HSplit.", "In ASSET, annotators tended to not split sentences, and those who did mostly divided the original sentence into just two sentences (1 split).", "Compression is a differentiating feature of ASSET.", "Both TurkCorpus and HSplit have high density of a compression ratio of 1.0, which means that no compression was performed.", "In fact, HSplit has several instances with compression levels greater than 1.0, which could be explained by splitting requiring adding words to preserve fluency.", "In contrast, ASSET offers more variability, perhaps signalling that annotators consider deleting information as an important simplification operation.", "By analysing replace-only Levenshtein distance, we can see that simplifications in ASSET paraphrase the input more.", "For TurkCorpus and HSplit, most simplifications are similar to their original counterparts (higher densities closer to 0).", "On the other hand, ASSET's simplifications are distributed in all levels, indicating more diversity in the reword-ings performed.", "This observation is complemented by the distributions of deleted, added and reordered words.", "Both TurkCorpus and HSplit have high densities of ratios close to 0.0 in all these features, while ASSET's are more distributed.", "Moreover, these ratios are rarely equal to 0 (low density), meaning that for most simplifications, at least some effort was put into rewriting the original sentence.", "This is comfirmed by the low percentage of exact matches in ASSET (0.4%) with respect to TurkCorpus (16.3%) and HSplit (26.5%).", "Once again, it suggests that more rewriting transformations are being performed in ASSET.", "In terms of lexical complexity, HSplit has a high density of ratios close to 1.0 due to its simplifications being structural and not lexical.", "TurkCorpus offers more variability, as expected, but still their simplifications contain a high number of words that are equally complex, perhaps due to most simplifications just changing a few words.", "On the other hand, ASSET's simplifications are more distributed across different levels of reductions in lexical complexity.", "Finally, all datasets show high densities of a 1.0 ratio in dependency tree depth.", "This could mean that significant structural changes were not made, which is indicated by most instances corresponding to operations other than splitting.", "Here we measure the quality of the collected simplifications using human judges.", "In particular, we study if the abstractive simplifications in ASSET (test set) are preferred over lexical-paraphrase-only or splitting-only simplifications in TurkCorpus (test set) and HSplit, respectively.", "Preference judgments were crowdsourced with a protocol similar to that of the simplifications ( 3.1).", "Selecting Human Judges.", "Workers needed to comply with the same basic requirements as described in 3.1.", "For this task, the Qualification Test (QT) consisted in rating the quality of simplifications based on three criteria: fluency (or gram-maticality), adequacy (or meaning preservation), and simplicity.", "Each HIT consisted of six original-simplified sentence pairs, and workers were asked to use a continuous scale (0-100) to submit their level of agreement (0: Strongly disagree, 100: Strongly agree) with the following statements:", "1. 
The Simplified sentence adequately expresses the meaning of the Original, perhaps omitting the least important information.", "2. The Simplified sentence is fluent, there are no grammatical errors.", "3. The Simplified sentence is easier to understand than the Original sentence.", "Using continuous scales when crowdsourcing human evaluations is common practice in Machine Translation (Bojar et al., 2018; Barrault et al., 2019), since it results in higher levels of inter-annotator consistency (Graham et al., 2013).", "The six sentence pairs for the Rating QT consisted of: Three submissions to the Annotation QT, manually selected so that one contains splitting, one has a medium level of compression, and one contains grammatical and spelling mistakes.", "These allowed to check that the particular characteristics of each sentence pair affect the corresponding evaluation criteria.", "One sentence pair extracted from WikiLarge (Zhang and Lapata, 2017) that contains several sentence splits.", "This instance appeared twice in the HIT and allowed checking for intra-annotator consistency.", "One sentence pair from WikiLarge where the Original and the Simplification had no relation to each other.", "This served to check the attention level of the worker.", "All submitted ratings were manually reviewed to validate the quality control established and to select the qualified workers for the task.", "Preference Task.", "For each of the 359 original sentences in the test set, we randomly sampled one reference simplification from ASSET and one from TurkCorpus, and then asked qualified workers to choose which simplification answers best each of the following questions: Fluency : Which sentence is more fluent?", "Meaning : Which sentence expresses the original meaning the best?", "Simplicity : Which sentence is easier to read and understand?", "Workers were also allowed to judge simplifications as similar when they could not determine which one was better.", "The same process was followed to compare simplifications in ASSET against those in HSplit.", "Each HIT consisted of 10 sentence pairs.", "Table 4 (top section) presents, for each evaluation dimension, the percentage of times a simplification from ASSET or TurkCorpus was preferred over the other, and the percentage of times they were judged as similar.", "In general, judges preferred ASSET's simplifications in terms of fluency and simplicity.", "However, they found TurkCorpus' simplifications more meaning preserving.", "This is expected since they were produced mainly by replacing words/phrases with virtually no deletion of content.", "A similar behaviour was observed when comparing ASSET to HSplit (bottom section of Table 4).", "In this case, however, the differences in preferences are greater than with TurkCorpus.", "This could indicate that changes in syntactic structure are not enough for a sentence to be consider simpler.", "In this section we study the behaviour of evaluation metrics for SS when using ASSET's simplifications (test set) as references.", "In particular, we measure the correlation of standard metrics with human judgements of fluency, adequacy and simplicity, on simplifications produced by automatic systems.", "Evaluation Metrics.", "We analysed the behaviour of two standard metrics in automatic evaluation of SS outputs: BLEU (Papineni et al., 2002) and SARI (Xu et al., 2016).", "BLEU is a precision-oriented metric that relies on the number of n -grams in the output that match n -grams in the references, independently of position.", 
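As a concrete illustration of this kind of precision-oriented, multi-reference scoring at the sentence level, the snippet below computes a smoothed sentence-level BLEU with NLTK. It is only a sketch, not the implementation used in the paper, and the tokenisation and smoothing choices are assumptions.

```python
# Illustrative only: multi-reference, smoothed sentence-level BLEU with NLTK.
from nltk.tokenize import word_tokenize
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def sentence_level_bleu(system_output: str, references: list[str]) -> float:
    hyp = word_tokenize(system_output.lower())
    refs = [word_tokenize(r.lower()) for r in references]
    # Smoothing makes BLEU usable on single sentences; the method choice is an assumption.
    return sentence_bleu(refs, hyp, smoothing_function=SmoothingFunction().method3)
```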
"SARI measures improvement in the simplicity of a sentence based on the n -grams added, deleted and kept by the simplification system.", "It does so by comparing the output of the simplification model to multiple references and the original sentence, using both precision and recall.", "BLEU has shown positive correlation with human judgements of grammaticality and meaning preservation (Stajner et al., 2014; Wubben et al., 2012; Xu et al., 2016), while SARI has high correlation with judgements of simplicity gain (Xu et al., 2016).", "In our experiments, we used the implementations of these metrics available in the EASSE package for automatic sentence simplification evaluation (Alva-Manchego et al., 2019).", "5 We computed all the scores at sentence-level as in the experiment by Xu et al. (2016), where they compared sentence-level correlations of FKGL, BLEU and SARI with human ratings.", "We used a smoothed sentence-level version of BLEU so that comparison is possible, 5 https://github.com/feralvam/easse even though BLEU was designed as a corpus-level metric.", "System Outputs.", "We used publicly-available simplifications produced by automatic SS systems: PBSMT-R (Wubben et al., 2012), which is a phrase-based MT model; Hybrid (Narayan and Gar-dent, 2014), which uses phrase-based MT coupled with semantic analysis; SBSMT-SARI (Xu et al., 2016), which relies on syntax-based MT; NTS-SARI (Nisioi et al., 2017), a neural sequence-to-sequence model with a standard encoder-decoder architecture; and ACCESS (Martin et al., 2020), an encoder-decoder architecture conditioned on explicit attributes of sentence simplification.", "Collection of Human Ratings.", "We randomly chose 100 original sentences from ASSET and, for each of them, we sampled one system simplification.", "The automatic simplifications were selected so that the distribution of simplification transformations (e.g. sentence splitting, compression, paraphrases) would match that from human simplifications in ASSET.", "That was done so that we could obtain a sample that has variability in the types of rewritings performed.", "For each sentence pair (original and automatic simplification), we crowdsourced 15 human ratings on fluency (i.e. grammat-icality), adequacy (i.e. meaning preservation) and simplicity, using the same worker selection criteria and HIT design of the Qualification Test as in 5.1.", "We followed the process suggested in (Graham et al., 2013).", "First, we normalised the scores of each rater by their individual mean and standard deviation, which helps eliminate individual judge preferences.", "Then, the normalised continuous scores were converted to five interval categories using equally spaced bins.", "After that, we followed Pavlick and Tetreault (2016) and computed quadratic weighted Cohen's (Cohen, 1968) simulating two raters: for each sentence, we chose one worker's rating as the category for annotator A, and selected the rounded average scores for the remaining workers as the category for annotator B. 
"We then computed κ for this pair over the whole dataset.", "We repeated the process 1,000 times to compute the mean and variance of κ.", "The resulting values are: 0.687 ± 0.028 for Fluency, 0.686 ± 0.030 for Meaning and 0.628 ± 0.032 for Simplicity.", "All values point to a moderate level of agreement, which is in line with the subjective nature of the simplification task.", "Table 5: Pearson correlation of human ratings with automatic metrics on system simplifications. BLEU with ASSET references: Fluency 0.42*, Meaning 0.61*, Simplicity 0.31*; BLEU with TurkCorpus references: 0.35*, 0.59*, 0.18; SARI with ASSET references: 0.16, 0.13, 0.28*; SARI with TurkCorpus references: 0.14, 0.10, 0.17.", "We computed the Pearson correlation between the normalised ratings and the evaluation metrics of our interest (BLEU and SARI) using ASSET or TurkCorpus as the set of references.", "We refrained from experimenting with HSplit since neither BLEU nor SARI correlate with human judgements when calculated using that dataset as references (Sulem et al., 2018a).", "Results are reported in Table 5.", "BLEU shows a strong positive correlation with Meaning Preservation using either simplifications from ASSET or TurkCorpus as references.", "There is also some positive correlation with Fluency judgements, but that is not always the case for Simplicity: no correlation when using TurkCorpus and moderate when using ASSET.", "This is in line with previous studies that have shown that BLEU is not a good estimate for simplicity (Wubben et al., 2012; Xu et al., 2016; Sulem et al., 2018b).", "In the case of SARI, correlations are positive but low with all criteria and significant only for simplicity with ASSET's references.", "Xu et al. (2016) showed that SARI correlated with human judgements of simplicity gain, when instructing judges to grade the quality of the variations by identifying the words/phrases that are altered, and counting how many of them are good simplifications. 6", "The judgements they requested differ from the ones we collected, since theirs were tailored to rate simplifications produced by lexical paraphrasing only.", "These results show that SARI might not be suitable for the evaluation of automatic simplifications with multiple rewrite operations.", "In Table 6, we further analyse the human ratings collected, and compute their correlations with similar text features as in Section 4.", "6 https://github.com/cocoxu/simplification/tree/master/HIT_MTurk_crowdsourcing", "Table 6 (excerpt): Pearson correlation of text features with human ratings. Length: Fluency 0.12, Meaning 0.31*, Simplicity 0.03; Sentence Splits: -0.13, -0.06, -0.08; Compression Level: 0.26*, 0.46*, 0.04; Levenshtein Distance: -0.40*, -0.67*, -0.18.", "The results shown reinforce our previous observations that judgements on Meaning correlate with making few changes to the sentence: strong negative correlation with Levenshtein distance, and strong negative correlation with the proportion of words added, deleted, and reordered.", "No conclusions could be drawn with respect to Simplicity.", "We have introduced ASSET, a new dataset for tuning and evaluation of SS models.", "Simplifications in ASSET were crowdsourced, and annotators were instructed to apply multiple rewriting transformations.", "This improves current publicly-available evaluation datasets, which are focused on only one type of transformation.", "Through several experiments, we have shown that ASSET contains simplifications that are more abstractive, and that are considered simpler than those in other evaluation corpora.", "Furthermore, we have motivated the need to develop new metrics for automatic 
evaluation of SS models, especially when evaluating simplifications with multiple rewriting operations.", "Finally, we hope that ASSET's multi-transformation features will motivate the development of SS models that benefit a variety of target audiences according to their specific needs, such as people with low literacy or cognitive disabilities.", "This work was partly supported by Benoît Sagot's chair in the PRAIRIE institute, funded by the French national agency ANR as part of the Investissements d'avenir programme under the reference ANR-19-P3IA-0001." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "method", "result", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "objective", "abstain", "abstain", "result", "objective", "objective", "other" ]
[ "Reasoning is a crucial part of natural language argumentation.", "To comprehend an argument, one must analyze its warrant , which explains why its claim follows from its premises.", "As arguments are highly contextualized, warrants are usually presupposed and left implicit.", "Thus, the comprehension does not only require language understanding and logic skills, but also depends on common sense.", "In this paper we develop a methodology for reconstructing warrants systematically.", "We operationalize it in a scalable crowdsourcing process, resulting in a freely licensed dataset with warrants for 2k authentic arguments from news comments.", "1 On this basis, we present a new challenging task, the argument reasoning comprehension task .", "Given an argument with a claim and a premise, the goal is to choose the correct implicit warrant from two options.", "Both warrants are plausible and lexically close, but lead to contradicting claims.", "A solution to this task will define a substantial step towards automatic warrant reconstruction.", "However, experiments with several neural attention and language models reveal that current approaches do not suffice.", "What do the three propositions have in common?", "They were never uttered but solely presupposed in arguments made by the participants of online discussions.", "Presuppositions are a fundamental pragmatic instrument of natural language argumentation in which parts of arguments are left unstated.", "This phenomenon is also referred to as 1 Available at https://github.com/UKPLab/ argumentreasoning-comprehension-task/ , including source codes and supplementary materials.", "common knowledge (Macagno and Walton, 2014, p. 218), enthymemes (Walton, 2007b, p. 12), tacit major premises (Amossy, 2009, p. 319), or implicit warrants (Newman and Marshall, 1991, p. 8).", "Wilson and Sperber (2004) suggest that, when we comprehend arguments, we reconstruct their warrants driven by the cognitive principle of relevance.", "In other words, we go straight for the interpretation that seems most relevant and logical within the given context (Hobbs et al., 1993).", "Although any incomplete argument can be completed in different ways (Plumer, 2016), it is assumed that certain knowledge is shared between the arguing parties (Macagno and Walton, 2014, p. 
180).", "Filling the gap between the claim and premises (aka reasons) of a natural language argument empirically remains an open issue, due to the inherent difficulty of reconstructing the world knowledge and reasoning patterns in arguments.", "In a direct fashion, Boltuzic and Snajder (2016) let annotators write down implicit warrants, but they concluded only with a preliminary analysis due to large variance in the responses.", "In an indirect fashion, implicit warrants correspond to major premises in argumentation schemes; a concept heavily referenced in argumentation theory (Wal-ton, 2012).", "However, mapping schemes to real-world arguments has turned out difficult even for the author himself.", "Our main hypothesis is that, even if there is no limit to the tacit length of the reasoning chain between claims and premises, it is possible to systematically reconstruct a meaningful warrant, depending only on what we take as granted and what needs to be explicit.", "As warrants encode our current presupposed world knowledge and connect the reason with the claim in a given argument, we expect that other warrants can be found which connect the reason with a different claim.", "In the ex-1930 Title: Is Marijuana a Gateway Drug?", "treme case, there may exist an alternative warrant in which the same reason is connected to the opposite claim.", "The intuition of alternative warrants is key to the systematic methodology that we develop in this paper for reconstructing a warrant for the original claim of an argument.", "In particular, we first twist' the stance of a given argument, trying to plausibly explain its reasoning towards the opposite claim.", "Then, we twist the stance back and use a similar reasoning chain to come up with a warrant for the original argument.", "As we discuss further below, this works for real-world arguments with a missing piece of information that is taken for granted and considered as common knowledge, yet, would lead to the opposite stance if twisted.", "We demonstrate the applicability of our methodology in a large crowdsourcing study.", "The study results in 1,970 high-quality instances for a new task that we call argument reasoning comprehension : Given a reason and a claim, identify the correct warrant from two opposing options.", "An example is given in Figure 1. 
A solution to this task will represent a substantial step towards automatic warrant reconstruction.", "However, we present experiments with several neural attention and language models which reveal that current approaches based on the words and phrases in arguments and warrants do not suffice to solve the task.", "The main contributions of this paper are (1) a methodology for obtaining implicit warrants realized by means of scalable crowdsourcing and (2) a new task along with a high-quality dataset.", "In addition, we provide", "(a) 2,884 user-generated arguments annotated for their stance, covering 50+ controversial topics,", "(b) 2,026 arguments with annotated reasons supporting the stance,", "(c) 4,235 rephrased reason gists, useful for argument summarization and sentence compression, and", "(d) a method for checking the reliability of crowdworkers in document and span labeling using traditional inter-annotator agreement measures.", "It is widely accepted that an argument consists of a claim and one or more premises (reasons) (Damer, 2013).", "Toulmin (1958) elaborated on a model of argument in which the reason supports the claim on behalf of a warrant .", "The abstract structure of an argument then is Reason (since) Warrant (therefore) Claim .", "The warrant takes the role of an inference rule, similar to the major premise in Walton's terminology (Walton, 2007a).", "In principle, the chain Reason Warrant Claim is applicable to deductive arguments and syllogisms, which allows us to validate arguments properly formalized in propositional logic.", "However, most natural language arguments are in fact inductive (Govier, 2010, p. 255) or defeasible (Walton, 2007b, p. 29).", "2 Accordingly, the unsuitability of formal logic for natural language arguments has been discussed by argumentation scholars since the 1950's (Toulmin, 1958).", "To be clear, we do not claim that arguments cannot be represented logically (e.g., in predicate logic), however the drift to informal logic in the 20th cen-tury makes a strong case that natural language argumentation is more than modus ponens (van Eemeren et al., 2014).", "In argumentation theory, the notion of a warrant has also been contentious.", "Some argue that the distinction of warrants from premises is clear only in Toulmin's examples but fails in practice, i.e., it is hard to tell whether the reason of a given argument is a premise or a warrant (van Eemeren et al., 1987, p. 205).", "However, Freeman (2011) provides alternative views on modeling an argument.", "Given a claim and two or more premises, the argument structure is linked if the reasoning step involves the logical conjunction of the premises.", "If we treat a warrant as a simple premise, then the linked structure fits the intuition behind Toulmin's model, such that premise and warrant combined give support to the claim.", "For details, see (Free-man, 2011, Chap. 4).", "2 A recent empirical example is provided by Walker et al. (2014) who propose possible approaches to identify patterns of inference from premises to claims in vaccine court cases.", "The authors conclude that it is extremely rare that a reasoning is explicitly laid out in a deductively valid format.", "What makes comprehending and analyzing arguments hard is that claims and warrants are usually implicit (Freeman, 2011, p. 
82).", "As they are taken for granted' by the arguer, the reader has to infer the contextually most relevant content that she believes the arguer intended to use.", "To this end, the reader relies on common sense knowledge (Oswald, 2016; Wilson and Sperber, 2004).", "The reconstruction of implicit premises has already been faced in computational approaches.", "In light of the design of their argument diagramming tool, Reed and Rowe (2004) pointed out that the automatic reconstruction is a task that skilled analysts find both taxing and hard to explain.", "More recently, Feng and Hirst (2011) as well as Green (2014) outlined the reconstruction of missing enthymemes or warrants as future work, but they never approached it since.", "To date, the most advanced attempt in this regard is from Boltuzic and Snajder (2016).", "The authors let annotators re-construct' several propositions between premises and claims and investigated whether the number of propositions correlates with the semantic distance between the claim and the premises.", "However, they conclude that the written warrants heavily vary both in depth and in content.", "By contrast, we explore cases with a missing single piece of information that is considered as common knowledge, yet leading to the opposite conclusion if twisted.", "Recently, Becker et al. (2017) also experimented with reconstructing implicit knowledge in short German argumentative essays.", "In contrast to our work, they used expert annotators who iteratively converged to a single proposition.", "As the task we propose involves natural language comprehension, we also review relevant work outside argumentation here.", "In particular, the goal of the semantic inference task textual entailment is to classify whether a proposition entails or contradicts a hypothesis (Dagan et al., 2009).", "A similar task, natural language inference , was boosted by releasing the large SNLI dataset (Bow-man et al., 2015) containing 0.5M entailment pairs crowdsourced by describing pictures.", "While the understanding of semantic inference is crucial in language comprehension, argumentation also requires coping with phenomena beyond semantics.", "Rajpurkar et al. (2016) presented a large dataset for reading comprehension by answering questions over Wikipedia articles (SQuAD).", "In an analysis of this dataset Sugawara and Aizawa (2016) found, though, that only 6.2% of the questions require causal reasoning, 1.2% logical reasoning, and 0% analogy.", "In contrast, these reasoning types often make up the core of argumentation (Walton, 2007a).", "Mostafazadeh et al. 
(2016) introduced the cloze story test , in which the appropriate ending of a narrative has to be selected automatically.", "The overall context of this task is completely different to ours.", "Moreover, the narratives were written from scratch by explicitly instructing crowd workers, whereas our data come from genuine argumentative comments.", "Common-sense reasoning was also approached by Angeli and Manning (2014) who targeted the inference of commonsense facts from a large knowledge base.", "Since their logical formalism builds upon an enhanced version of Aristotle's syllogisms, its applicability to natural language argumentation remains limited (see our discussion above).", "In contrast to our data source, a few synthetic datasets for general natural language reasoning have been recently introduced, such as answers to questions over a described physical world (Weston et al., 2016) or an evaluation set of 100 questions in the Winograd Schema Challenge (Levesque et al., 2012).", "Finally, we note that, although being related, research on argument mining, argumentation quality, and stance classification is not in the immediate scope of this paper.", "For details on these, we therefore refer to recent papers from Lippi and Torroni (2016); Habernal and Gurevych (2017) or Mohammad et al. (2016).", "Let R be a reason for a claim C , both of which being propositions extracted from a natural language argument.", "Then there is a warrant W that justifies the use of R as support for C , but W is left implicit.", "For example, in a discussion about whether de-clawing a cat should be illegal, an author takes the following position (which is her claim C ): It should be illegal to declaw your cat'.", "She gives the following reason ( R ): They need to use their claws for defense and instinct'.", "3 The warrant W could then be If cat needs claws for instincts, de-clawing would be against nature' or similar.", "W remains implicit, because R already implies C quite obviously and so, according to common sense, any further explanation seems superfluous.", "3 The example is taken from our dataset introduced below.", "Now, the question is how to find the warrant W for a given reason R and claim C .", "Our key hypothesis in the definition of the argument reasoning comprehension task is the existence of an alternative warrant AW that justifies the use of R as support for the opposite C of the claim C (regard-less of the question of how strong this justification is).", "For the example above, assume that we twist' C to It should be legal to declaw your cat' ( C ) but use the same reason R .", "Is it possible to come up with an alternative warrant AW that justifies R ?", "In the given case, most house cats don't face en-emies' would bridge R to C quite plausibly.", "If we now use a reasoning based on AW but twist AW again such that it leads to the claim C , we get most house cats face enemies', which is a plausible warrant W for the original argument containing R and C .", "4 Constructing an alternative warrant is not possible for all reason/claim pairs; in some reasons the arguer's position is deeply embedded.", "As a result, trying to give a plausible reasoning for the opposite claim C either leads to nonsense or to a proposition that resembles a rebuttal rather than a warrant (Toulmin, 1958).", "However, if both W and AW are available, they usually capture the core of a reason's relevance and reveal the implicit presuppositions (examples follow further below).", "argument reasoning comprehension task as: 
Given a reason R and a claim C along with the title and a short description of the debate they occur in, identify the correct warrant W from two candidates: the correct warrant W and an incorrect alternative warrant AW.", "An instance of the task is thus basically given by a tuple ( R , C , W , AW ) .", "The debate title and description serve as the context of R and C .", "As it is binary, we propose to evaluate the task using accuracy.", "We now describe our methodology to systematically reconstruct implicit warrants, along with the scalable crowdsourcing process that operational-izes the methodology.", "The result of the process is 4 This way, we also reveal the weakness of the original argument that was hidden in the implicit premise.", "It can be challenged by asking the arguer whether house cats really face enemies.", "a dataset with authentic instances ( R , C , W , AW ) of the argument reasoning comprehension task.", "Instead of extending an existing dataset, we decided to create a new one from scratch, because we aimed to study a variety of controversial issues in user-generated web comments and because we sought for a dataset with a permissive license.", "As a source, we opted for the Room for Debate section of the New York Times.", "5 It provides authentic argumentation on contemporary issues with good editorial work and moderation as opposed to debate portals such as createde-bate.com , where classroom assignments, silly topics, and bad writing prevail.", "We manually selected 188 debates with polar questions in the title.", "These questions are controversial and provoking, giving a stimulus for stance-taking and argumentation.", "6 For each debate we created two explicit opposing claims, e.g., It should be illegal to declaw your cat' and It should be legal to declaw your cat'.", "We crawled all comments from each debate and sampled about 11k high-ranked, root-level comments.", "7 4.2 Methodology and Crowdsourcing Process The methodology we propose consists of eight consecutive steps that are illustrated in Figure 2 and detailed below.", "Each step can be operational-ized with crowdsourcing.", "For our dataset, we performed crowdsourcing on 5,000 randomly sampled comments using Amazon Mechanical Turk (AMT) from December 2016 to April 2017.", "Before, each comment was split into elementary discourse units (EDUs) using SistaNLP (Surdeanu et al., 2015).", "1. Stance Annotation For each comment, we first classify what stance it is taking (recall that we always have two explicit claims with opposing stance).", "Alternatively, it may be neutral (consider-5 https://www.nytimes.com/roomfordebate 6 Detailed theoretical research on polar and alternative questions can be found in (van Rooy and Safarova, 2003); Asher and Reese (2005) analyze bias and presupposition in polar questions.", "7 To remove noisy' candidates, we applied several criteria, such as the absence of quotations or URLs and certain lengths.", "For details, see the source code we provide.", "We did not check any quality criteria of arguments, as this was not our focus; see, e.g., (Wachsmuth et al., 2017) for argumentation quality.", "All 2,884 comments in our dataset classified as stance-taking by the crowdworkers were then also annotated as to whether being sarcastic or ironic; both pose challenges in analyzing argumentation not solved so far (Habernal and Gurevych, 2017).", "2. 
Reason Span Annotation For all comments taking a stance, the next step is to select those spans that give a reason for the claim (with a single EDU as the minimal unit).", "In our dataset, the workers found 5,119 reason spans, of which 2,026 lay within arguments.", "About 40 comments lacked any explicit reason.", "3. Reason Gist Summarization This new task is, in our view, crucial for downstream annotations.", "Each reason from the previous step is rewritten, such that the reason's gist in the argument remains the same but the clutter is removed (exam-ples are given in the supplementary material which is available both in the ACL Anthology and the project GitHub site).", "Besides, wrongly annotated reasons are removed in this step.", "The result is pairs of reason R and claim C .", "All 4,294 gists in our dataset were summarized under Creative Commons Zero license (CC-0).", "4. Reason Disambiguation Within our methodology, we need to be able to identify to what extent a reason itself implies a stance: While C because R ' allows for many plausible interpretations (as discussed above), whether R C or R C depends on how much presupposition is encoded in R .", "In this step, we decide which claim ( C or C ) is most plausible for R , or whether both are 8 We also experimented with approaching the annotations top-down starting by annotating explicit claims, but the results were unsatisfying.", "This is in line with empirical observations made by Habernal and Gurevych (2017) who showed that the majority of claims in user-generated arguments are implicit.", "similarly plausible (in the given data, respective reasons turned out to be rather irrelevant though).", "We used only those 1,955 instances where R indeed implied C according to the workers, as this suggests at least some implicit presupposition in R .", "5. Alternative Warrant This step is the trickiest, since it requires both creativity and brain twisting'.", "As exemplified in Section 3, a plausible explanation needs to be given why R supports C (i.e., the alternative warrant AW ).", "Alternatively, this may be classified as being impossible.", "Exact instructions for our workers can be found in the provided sources.", "All 5,342 alternative warrants in our dataset are written under CC-0 license.", "6. Alternative Warrant Validation As the previous step produces largely uncontrolled writings, we validate each fabricated alternative warrant AW as to whether it actually relates to the reason R .", "To this end, we show AW and C together with two alternatives: R itself and a distracting reason.", "Only instances with correctly validated R are kept.", "For our dataset, we sampled the distracting reason from the same debate topic, using the most dissimilar to R in terms of skip-thought vectors (Kiros et al., 2015) and cosine similarity.", "We kept 3,791 instances, for which the workers also rated how logical' the explanation of AW was (0 2 scale).", "7. Warrant For Original Claim This step refers to the second task in the example from Section 3: Given R and C , make minimal modifi-cations to the alternative warrant AW , such that it becomes an actual warrant W (i.e., such that R W C ).", "For our dataset, we restricted this step to those 2,613 instances that had a logic score' of at least 0.68 (obtained from the annotations mentioned 1934 above), in order to filter out nonsense alternative warrants.", "8. 
Warrant Validation To ensure that each tuple ( R , C , W , AW ) allows only one logical explanation (i.e., either R W C or R AW C is correct, not both), all instances are validated again.", "Disputed cases in the dataset (according to our workers) were fixed by an expert to ensure quality.", "We ended up with 1,970 instances to be used for the argument reasoning comprehension task.", "To strictly assess quality in the entire crowdsourcing process, we propose an evaluation method that enables classic' inter-annotator agreement measures for crowdsourcing, such as Fleiss' or Krippendorff's .", "Applying and directly to crowdsourced data has been disputed (Passonneau and Carpenter, 2014).", "For estimating gold labels from the crowd, several models have been proposed; we rely on MACE (Hovy et al., 2013).", "Given a number of noisy workers, MACE outputs best estimates, outperforming simple majority votes.", "At least five workers are recommended for a crowdsourcing task, but how reliable is the output really?", "We hence collected 18 assignments per item and split them into two groups (9+9) based on their submission time.", "We then considered each group as an independent crowdsourcing experiment and estimated gold labels using MACE for each group, thus yielding two experts from the crowd.' Having two independent experts' from the crowd allowed us to compute standard agreement scores.", "We also varied the size of the sub-sample from each group from 1 to 9 by repeated random sampling of assignments.", "This revealed how the score varies with respect to the crowd size per expert'.", "Figure 3 shows the Cohen's agreement for stance annotation with respect to the crowd size computed by our method.", "As MACE also includes a threshold for keeping only the most confident predictions in order to benefit precision, we tuned this parameter, too.", "Deciding on the number of workers per task is a trade-off between the desired quality and the budget.", "For example, reason span annotation is a harder task; however, the results for six workers are comparable to those for the expert annotations of Habernal and Gurevych (2017).", "9 9 The supplementary material contains a detailed figure; 1 2 3 4 5 6 7 8 9 Workers per \"expert\" 0.3 0.4 0.5 0.6 0.7 0.8 C o h e n ' s k a pp a MACE Threshold 0.850.90.951.0 Error bars = std.", "Table 1 lists statistics of the entire crowdsourcing process carried out for our dataset, including tasks for which we created data as a by-product.", "Below, we show three examples in which implicit common-sense presuppositions were revealed during the construction of the alternative warrant AW and the original warrant W .", "For brevity, we omit the debate title and description here.", "A full walk-through example is found in the supplementary material.", "R : Cooperating with Russia on terrorism ignores Russia's overall objectives.", "C : Russia cannot be a partner.", "AW : Russia has the same objectives of the US.", "W : Russia has the opposite objectives of the US.", "R : Economic growth needs innovation.", "C : 3-D printing will change the world.", "AW : There is no innovation in 3-d printing since it's unsustainable.", "W : There is much innovation in 3-d printing and it is sustainable.", "R : College students have the best chance of knowing history.", "C : College students' votes do matter in an election.", "AW : Knowing history doesn't mean that we will repeat it.", "W : Knowing history means that we won't repeat it.", "Given the dataset, we performed first experiments to 
assess the complexity of argument reasoning comprehension.", "To this end, we split the 1,970 instances into three sets based on the year of the de-not to be confused with Figure 3 which refers to stance annotation.", "bate they were taken from: 20112015 became the training set (1,210 instances), 2016 the development set (316 instances), and 2017 the test set (444 instances).", "This follows the paradigm of learning on past data and predicting on new ones.", "In addition, it removes much lexical and topical overlap.", "To evaluate human upper bounds for the task, we sampled 100 random questions (such as those presented in Section 4.4) from the test set and distributed them among 173 participants of an AMT survey.", "Every participant had to answer 10 questions.", "We also asked the participants about their highest completed education (six categories) and the amount of formal training they have in reasoning, logic, or argumentation (no training, some, or extensive).", "In addition, they specified for each question how familiar they were with the topic (3-point scale).", "How Hard is the Task for Humans?", "It depends, as shown in Figure 4. Whereas education had almost negligible influence on the performance, the more extensive formal training in reasoning the participants had, the higher their score was.", "Overall, 30 of the 173 participants scored 100%.", "The mean score for those with extensive 1 12 29 7 30 8 2 19 7 39 8 2 1 5 3 < H g h .", "formal training was 90.9%.", "For all participants, the mean was 79.8%.", "However, we have to note that some of the questions are more difficult than others, for which we could not control explicitly.", "Does Topic Familiarity Affect Human Performance?", "Not really, i.e., we found no significant (Spearman) correlation between the mean score and familiarity of a participant in almost all educa-tion/training configurations.", "This suggests that ar-1936 gument reasoning comprehension skills are likely to be independent of topic-specific knowledge.", "To assess the complexity of computationally approaching argument reasoning comprehension, we carried out first experiments with systems based on the following models.", "The simplest considered model was the random baseline , which chooses either of the candidate warrants of an instance by chance.", "As another baseline, we used a 4-gram Modified Kneser-Ney language model trained on 500M tokens (100k vocabulary) from the C4Corpus (Habernal et al., 2016).", "The effectiveness of language models was demonstrated by Rudinger et al. (2015) for the narrative cloze test where they achieved state-of-the-art results.", "We computed log-likelihood of the candidate warrants and picked the one with lower score.", "10 To specifically appoach the given task, we implemented two neural models based on a bidirectional LSTM.", "In the standard attention version, we encoded the reason and claim using a BiL-STM and provided it as an attention vector after max-pooling to LSTM layers from the two available warrants W 0 and W 1 (corresponding to W and AW , see below).", "Our more elaborated version used intra-warrant attention , as shown in Figure 5. 
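A rough, self-contained sketch of the standard-attention variant described above (reason and claim encoded with a BiLSTM, max-pooled into a query that attends over each warrant, one score per warrant) might look as follows. The hidden sizes, the dot-product attention form and the scoring head are assumptions; the intra-warrant and w/ context variants are not shown.

```python
# Simplified sketch of an attention-based warrant scorer; all sizes are assumptions.
import torch
import torch.nn as nn

class WarrantScorer(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.ctx_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.warrant_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden, 1)

    def encode_context(self, reason_claim_ids):
        h, _ = self.ctx_lstm(self.emb(reason_claim_ids))       # (B, T, 2H)
        return h.max(dim=1).values                             # max-pool -> (B, 2H)

    def attend_and_score(self, warrant_ids, query):
        h, _ = self.warrant_lstm(self.emb(warrant_ids))        # (B, T, 2H)
        attn = torch.softmax((h * query.unsqueeze(1)).sum(-1), dim=1)   # (B, T)
        pooled = (attn.unsqueeze(-1) * h).sum(dim=1)            # (B, 2H)
        return self.score(pooled).squeeze(-1)                   # (B,)

    def forward(self, reason_claim_ids, w0_ids, w1_ids):
        q = self.encode_context(reason_claim_ids)
        # Two logits, one per candidate warrant; train with nn.CrossEntropyLoss.
        return torch.stack([self.attend_and_score(w0_ids, q),
                            self.attend_and_score(w1_ids, q)], dim=1)
```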
Both versions were also extended with the debate title and description added as context to the attention layer ( w/ context ).", "We trained the resulting four models using the ADAM optimizer, with heavy dropout (0.9) and early stopping (5 epochs), tuned on the development set.", "Input em-beddings were pre-trained word2vec's (Mikolov et al., 2013).", "We ran each model three times with random initializations.", "To evaluate all systems, each instance in our dataset is represented as a tuple ( R , C , W 0 , W 1 ) with a label (0 or 1).", "If the label is 0, W 0 is the correct warrant, otherwise W 1 .", "Recall that we have two warrants W and AW whose correctness depends on the claim: W is correct for R and the original claim C , whereas AW would be correct for R and the opposite claim C .", "We thus doubled the training data by adding a permuted instance ( R , C , W 1 , W 0 ) with the respective correct label; this led to increased performance.", "The overall 10 This might seem counterintuitive, but since W is created by rewriting AW , it may suffer from some dis-coherency, which is then caught by the language model.", "results of all approaches (humans and systems) are shown in Table 2. Intra-warrant attention with rich context outperforms standard neural models with a simple attention, but it only slightly beats the language model on the dev set.", "The language model is basically random on the test set.", "A manual error analysis of 50 random wrong predictions (a single run of the best-performing system on the dev set) revealed no explicit pattern of encountered errors.", "Drawing any conclusions is hard given the diversity of included topics and the variety of reasoning patterns.", "A possible approach would be to categorize warrants using, e.g., argumentation schemes (Walton et al., 2008) and break down errors accordingly.", "However, this is beyond the scope here and thus left for future work.", "Can We Benefit from Alternative Warrants and Opposite Claims?", "Since the reasoning chain R AW C is correct, too, we also tried adding respective instances to the training set (thus doubling the size).", "In this configuration, however, the neural models failed to learn anything better than a random guess.", "The reason behind is probably that the opposing claims are lexically very close, usually negated, and the models cannot pick this up.", "This underlines that argument reasoning comprehension cannot be solved by simply looking at the occurring words or phrases.", "We presented a new task called argument reasoning comprehension that tackles the core of reasoning in natural language argumentation im-1937", "plicit warrants.", "Moreover, we proposed a methodology to systematically reconstruct implicit warrants in eight consecutive steps.", "So far, we implemented the methodology in a manual crowdsourcing process, along with a strategy that enables standard inter-annotator agreement measures in crowdsourcing.", "Following the process, we constructed a new dataset with 1,970 instances for the task.", "This number might not seem large (e.g., compared to 0.5M from SNLI), but tasks with hand-crafted data are of a similar size (e.g., 3,744 Story Cloze Test instances).", "Also, the crowdsourcing process is scalable and is limited only by the bud-get.", "11 Moreover, we created several data by-products' that are valuable for argumentation research: 5,000 comments annotated with stance, which outnumbers the 4,163 tweets for stance detection of Mohammad et al. 
(2016); 2,026 arguments with 4,235 annotated reasons, which is six times larger than the 340 documents of Habernal and Gurevych (2017); and 4,235 summarized reason gists we are not aware of any other handcrafted dataset for abstractive argument summarization built upon authentic arguments.", "Based on the dataset, we evaluated human performance in argument reasoning comprehension.", "Our findings suggest that the task is harder for people without formal argumentation training, while being solvable without knowing the topic.", "We also found that neural attention models outperform language models on the task.", "In the short run, we plan to draw more attention to this topic by running a SemEval 2018 shared task.", "12 A deep qualitative analysis of the warrants from the theoretical perspective of reasoning 11 In our case, the total costs were about $6,000 including bonuses and experiments with the workflow set-up.", "patterns or argumentation schemes is also necessary.", "In the long run, an automatic generation and validation warrants can be understood as the ultimate goal in argument evaluation.", "It has been claimed that for reconstructing and evaluating natural language arguments, one has to fully roll out' their implicit premises (van Eemeren et al., 2014, Chap. 3.2) and leverage knowledge bases (Wyner et al., 2016).", "We believe that a system that can distinguish between the wrong and the right warrant given its context will be helpful in filtering out good candidates in argument reconstruction.", "For the moment, we just made a first empirical step towards exploring how much common-sense reasoning is necessary in argumentation and how much common sense there might be at all.", "This work has been supported by the ArguAna Project GU 798/20-1 (DFG), and by the DFG-funded research training group Adaptive Preparation of Information form Heterogeneous Sources (AIPHES, GRK 1994/1)." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "objective", "objective", "method", "method", "objective", "objective", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "objective", "other", "other", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "method", "objective", "abstain", "abstain", "other", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "other" ]
[ "Recent works on Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) which are capable of reaching accuracy comparable to the original models.", "However, these tickets are proved to be not robust to adversarial examples, and even worse than their PLM counterparts.", "To address this problem, we propose a novel method based on learning binary weight masks to identify robust tickets hidden in the original PLMs.", "Since the loss is not differentiable for the binary mask, we assign the hard concrete distribution to the masks and encourage their sparsity using a smoothing approximation of L 0 regularization.", "Furthermore, we design an adversarial loss objective to guide the search for robust tickets and ensure that the tickets perform well both in accuracy and robustness.", "Experimental results show the significant improvement of the proposed method over previous work on adversarial robustness evaluation.", "Large-scale pre-trained language models (PLMs), such as BERT (Devlin et al., 2019), Roberta (Liu et al., 2019) and T5 (Raffel et al., 2019) have achieved great success in the field of natural language processing.", "As more transformer layers are stacked with larger self-attention blocks, the complexity of PLMs increases rapidly.", "Due to the over-parametrization of PLMs, some Transformer heads and even layers can be pruned without significant losses in performance (Michel et al., 2019; Kovaleva et al., 2019; Rogers et al., 2020).", "The Lottery Ticket Hypothesis suggests an over-parameterized network contains certain subnetworks (i.e., winning tickets) that can match the performance of the original model when trained in isolation (Frankle and Carbin, 2019).", "Chen Equal contribution.", "et al. (2020); Prasanna et al. (2020) also find these winning tickets exist in PLMs.", "Chen et al. (2020) prune BERT in an unstructured fashion and obtain winning tickets at sparsity from 40% to 90%.", "Prasanna et al. (2020) aim at finding structurally sparse tickets for BERT by pruning entire attention heads and MLP.", "Previous works mainly focused on using winning tickets to reduce model size and speed up training time (Chen et al., 2021), while little work has been done to explore more benefits, such as better adversarial robustness than the original model.", "As we all know, PLMs are vulnerable to adversarial examples that are legitimately crafted by imposing imperceptible perturbations on normal examples (Jin et al., 2020; Garg and Ramakrishnan, 2020; Wang et al., 2021).", "Recent studies have shown that pruned subnetworks of PLMs are even less robust than their PLM counterparts (Xu et al., 2021; Du et al., 2021).", "Xu et al. (2021) observe that when fine-tuning the pruned model again, the model yields a lower robustness.", "Du et al. 
(2021) clarify the above phenomenon further: the compressed models overfit on shortcut samples and thus perform consistently less robust than the uncompressed large model on adversarial test sets.", "In this work, our goal is to find robust PLM tickets that, when fine-tuned on downstream tasks, achieve matching test performance but are more robust than the original PLMs.", "In order to make the topology structure of tickets learnable, we assign binary masks to pre-trained weights to determine which connections need to be removed.", "To solve discrete optimization problem of binary masks, we assume the masks follow a hard concrete distribution (a soft version of the Bernoulli distribution), which can be solved using Gumbel-Softmax trick (Louizos et al., 2018).", "We then use an adversarial loss objective to guide the search for robust tickets and an approximate LO regularization is used to encourage the sparsity 2211 of robust tickets.", "Robust tickets can be used as a robust substitute of original PLMs to fine-tune downstream tasks.", "Experimental results show that robust tickets achieve a significant improvement in adversarial robustness on various tasks and maintain a matching accuracy.", "Our codes are publicly available at Github 1 .", "We demonstrate that PLMs contain robust tickets with matching accuracy but better robustness than the original network.", "We propose a novel and effective technique to find the robust tickets based on learnable binary masks rather than the traditional iterative magnitude-based pruning.", "We provide a new perspective to explain the vulnerability of PLMs on adversarial examples: some weights of PLMs do not contribute to the accuracy but may harm the robustness.", "Textual attacks typically generate explicit adversarial examples by replacing the components of sentences with their counterparts and maintaining a high similarity in semantics (Ren et al., 2019) or embedding space (Li et al., 2020).", "These adversarial attackers can be divided into character-level (Gao et al., 2018), word-level (Ren et al., 2019; Zang et al., 2020; Jin et al., 2020; Li et al., 2020) and multi-level (Li et al., 2018).", "In response to adversarial attackers, various adversarial defense methods are proposed to improve model robustness.", "Adversarial training solves a min-max robust optimization and is generally considered as one of the strongest defense methods (Madry et al., 2018; Zhu et al., 2020; Li and Qiu, 2020).", "Adversarial data augmentation (ADA) has been widely adopted to improve robustness by adding textual adversarial examples during training (Jin et al., 2020; Si et al., 2021).", "However, ADA is not sufficient to cover the entire perturbed search space, which grows exponentially with the length of the input text.", "Some regularization methods, such as smoothness-inducing regularization (Jiang et al., 2020) and information bottleneck regularization (Wang et al., 1 https://github.com/ruizheng20/robust_ticket 2020), are also beneficial for robustness.", "Different from the above methods, we dig robust tickets from original BERT, and the subnetworks we find have better robustness after fine-tuning.", "Lottery Ticket Hypothesis (LTH) suggests the existence of certain sparse subnetworks (i.e., winning tickets) at initialization that can achieve almost the same test performance compared to the original model (Frankle and Carbin, 2019).", "In the field of NLP, previous works find that the winning tickets also exist in Transformers and LSTM (Yu et al., 2020; Renda et 
al., 2020).", "Evci et al. (2020) propose a method to optimize the topology of the sparse network during training without sacrificing accuracy relative to existing dense-to-sparse training methods.", "Chen et al. (2020) find that PLMs such as BERT contain winning tickets with a sparsity of 40% to 90%, and the winning tickets found in the mask language modeling task can universally be transfered to other downstream tasks.", "Prasanna et al. (2020) find structurally sparse winning tickets for BERT, and they notice that all subnetworks (winning tickets and randomly pruned subnetworks) have comparable performance when fine-tuned on downstream tasks.", "Chen et al. (2021) propose an efficient BERT training method using Early-bird lottery tickets to reduce the training time and inference time.", "Some recent studies have tried to dig out more features of winning tickets.", "Zhang et al. (2021) demonstrate that even in biased models (which focus on spurious correlations) there still exist unbiased winning tickets.", "Liang et al. (2021) observe that at a certain sparsity, the generalization performance of the winning tickets can not only match but also exceed that of the full model.", "(Du et al., 2021; Xu et al., 2021) show that the winning tickets that only consider accuracy are over-fitting on easy samples and generalize poorly on adversarial examples.", "Our work makes the first attempt to find the robust winning tickets for PLMs.", "Learning to identify a subnetwork with high adversarial robustness is widely discussed in the field of computer vision.", "Post-train pruning approaches require a pre-trained model with adversarial robustness before pruning (Sehwag et al., 2019; Gui et al., 2019).", "In-train pruning methods integrate the pruning process into the robust learning process, which jointly optimize the model 2212 parameters and pruning connections (Vemparala et al., 2021; Ye et al., 2019).", "Sehwag et al. 
(2020) integrate the robust training objective into the pruning process and remove the connections based on importance scores.", "In our work, we focus on finding robust tickets hidden in the original PLMs rather than pruning subnetworks from a robust model.", "In this section, we propose a novel pruning method to extract robust tickets of PLMs by learning binary weight masks with an adversarial loss objective.", "Furthermore, we articulate the Robust Lottery Ticket Hypothesis: the full PLM contains subnetworks (robust tickets) that can achieve better adversarial robustness and comparable accuracy.", "Denote f(θ) as a PLM with parameters θ that has been fine-tuned on a downstream task.", "A subnetwork of f(θ) can be denoted as f(m ⊙ θ), where m are binary masks with the same dimension as θ and ⊙ is the Hadamard product operator.", "LTH suggests that, for a network initialized with θ_0, Iterative Magnitude Pruning (IMP) can identify a mask m such that the subnetwork f(x; m ⊙ θ_0) can be trained to almost the same performance as the full model f(θ_0) in a comparable number of iterations.", "Such a subnetwork f(x; m ⊙ θ_0) is called a winning ticket, including both the structure mask m and the initialization θ_0.", "IMP iteratively removes the weights with the smallest magnitudes from m until a certain sparsity is reached.", "However, magnitude-based pruning is not suitable for robustness-aware techniques (Vemparala et al., 2021; Sehwag et al., 2020).", "Our goal is to learn the sparse subnetwork; however, the training loss is not differentiable with respect to the binary masks.", "A simple choice is to adopt a straight-through estimator to approximate the derivative (Bengio et al., 2013).", "Unfortunately, this approach ignores the Heaviside function in the likelihood and results in biased gradients.", "Thus, we resort to a practical method to learn sparse neural networks (Louizos et al., 2018): u_i ∼ U(0, 1), (1) s_i = σ((log u_i − log(1 − u_i) + log α_i) / β), (2) m_i = min(1, max(0, s_i (ζ − γ) + γ)), (3)", "where σ denotes the sigmoid function, γ = −0.1 and ζ = 1.1 are constants, β is a temperature, and u_i is a sample drawn from the uniform distribution U(0, 1).", "The random variable s_i follows a binary concrete (or Gumbel-Softmax) distribution, which is a smoothing approximation of the discrete Bernoulli distribution (Maddison et al., 2017; Jang et al., 2017).", "Samples from the binary concrete distribution become identical to samples from a discrete Bernoulli distribution as the temperature β → 0.", "The location parameter α_i in (2) allows for gradient-based optimization through the reparametrization trick.", "Using (3), values larger than 1 are rounded to 1, whereas values smaller than 0 are rounded to 0.", "To encourage sparsity, we penalize the L0 complexity of the masks based on the probability of them being non-zero: R(m) = (1/|m|) Σ_{i=1}^{|m|} σ(log α_i − β log(−γ/ζ)).", "3.2.1 Adversarial Loss Objective To find the connections responsible for adversarial robustness, we incorporate the adversarial loss into the mask learning objective: min_m L_adv(m) = min_m E_{(x,y)∼D} [ max_{‖δ‖≤ε} L(f(x + δ; m ⊙ θ), y) ], (6)", "where (x, y) is a data point from dataset D and δ is the perturbation, constrained within the ε-ball.", "The inner maximization problem in (6) is to find the worst-case adversarial examples that maximize the classification loss, while the outer minimization problem in (6) aims at optimizing the masks to minimize the loss on adversarial examples, i.e., L_adv(m).", "We use projected gradient descent (PGD) to solve the inner maximization problem.", "PGD applies K-step stochastic gradient descent to search for the perturbation (Madry et al., 2018): δ_{k+1} = Π_{‖δ‖_F ≤ ε} ( δ_k + η g(δ_k) / ‖g(δ_k)‖_F ), (7) where g(δ_k) = ∇_δ L(f(x + δ_k; m ⊙ θ), y), δ_k is the perturbation at the k-th step, η is the step size, and Π(·) projects the perturbation back onto the Frobenius-norm ball.", "Then robust training optimizes the network on the adversarially perturbed input x + δ_K.", "Through the above process, we can conveniently obtain a large number of adversarial examples for training.", "By integrating the L0 complexity regularizer into the training process of the masks, our adversarial loss objective becomes: min_m L_adv(m) + λ R(m), (8) where λ denotes the regularization strength.", "The selection of the regularization strength λ decides the quality of robust tickets.", "Results on SST-2 in Fig. 1 show that eventually more than 90% of the masks will be very close to 0 or 1, and the L0 complexity regularizer R(m) will converge to a fixed value.", "As λ increases, R(m) decreases (the sparsity of the subnetwork increases).", "The training of the adversarial loss objective in (8) is insensitive to λ, and in all experiments λ is chosen in the range [0.1, 1].", "In Appendix A, we show more details about the mask learning process.", "After training the masks m, we use the location parameters log α of the masks to extract robust tickets.", "For the Gumbel-Softmax distribution in (2), the location α_i determines the expectation (confidence) of the random variable s_i.", "Thus, we prune the weights whose masks have the smallest expectation.", "We prune all attention heads and intermediate neurons in an unstructured manner, which empirically has better performance than structured pruning.", "Unlike the Lottery Ticket Hypothesis, which requires iterative magnitude pruning, the proposed method is a one-shot pruning method that can obtain subnetworks of any sparsity.", "Then we retrain (i.e., fine-tune) the robust tickets f(m ⊙ θ_0) on downstream tasks.", "In the context of adversarial robustness, we seek winning tickets that balance accuracy and robustness, and then we state and demonstrate the Robust Lottery Ticket Hypothesis.", "Robust Lottery Ticket Hypothesis: A pre-trained language model, such as BERT, contains subnetworks (robust tickets) initialized by pre-trained weights, and when these subnetworks are trained in isolation, they can achieve better adversarial robustness and comparable accuracy.", "In addition, robust tickets retain an important characteristic of traditional lottery tickets: the ability to speed up the training process.", "The practical merits of the Robust Lottery Ticket Hypothesis are: 1) It provides an effective pruning method that can reduce memory constraints during inference time by identifying well-performing smaller networks which can fit in memory.", "2) Our proposed robust ticket is more robust than the existing defense methods, so it can be used as a defense method.", "We conduct several experiments to demonstrate the effectiveness of our method.", "We first compare the proposed method with baseline methods in terms of clean accuracy and robust evaluation.", "Then, we perform an ablation study to illustrate the roles of sparse mask learning and the adversarial loss objective in our method.", "In addition, we try to further flesh out our method with several additional analysis experiments.", "Following the official BERT implementation (Devlin et al., 2019; Wolf et al., 2020), we use BERT-base as our backbone model for all experiments.", "We evaluate our method mainly on three text classification datasets: Internet Movie 
Database (IMDB, Maas et al., 2011) , AG News corpus (AGNEWS, Zhang et al., 2015) and Stanford Sentiment Treebank of binary classification (SST-2, Socher et al., 2013).", "We also test our method on other types of tasks in GLUE, such as MNLI, QNLI, QQP.", "The labels of GLUE test sets are not available, so GLUE test sets cannot be used for adversarial attacks.", "The results of GLUE tasks are tested on the official development set, and we divide 10% training data as the development set.", "We compare our RobustT ( Robust T ickets) with recently proposed adversarial defense methods and the standard lottery ticket.", "Fine-tune (Devlin et al., 2019): The official BERT implementation on downstream tasks.", "FreeLB (Zhu et al., 2020): An enhanced gradient-based adversarial training method which is not targeted at specific attack methods.", "InfoBERT (Wang et al., 2020): A learning framework for robust model fine-tuning from an information-theoretic perspective.", "This method claims that it has obtained a better representation of data features.", "LTH (Chen et al., 2020): For a range of downstream tasks, BERT contains winning lottery tickets at 40% to 90% sparsity.", "Random : Subnetworks with the same layer-wise sparsity of the above RobustT, but their structures are randomly pruned from the original BERT.", "Three widely accepted attack methods are used to verify the ability of our proposed method against baselines (Li et al., 2021).", "BERT-Attack (Li et al., 2020) is a method using BERT to generate adversarial text, and thus the generated adversarial examples are fluent and semantically preserved.", "TextFooler (Jin et al., 2020) first identify the important words in the sentences, and then replace them with synonyms that are semantically similar and grammatically correct until the prediction changes.", "TextBugger (Li et al., 2018) is an adversarial attack method that generates misspelled words by using character-level and word-level perturbations.", "accuracy (Clean % ) denotes the accuracy on the clean test dataset.", "Accuracy under attack (Aua % ) refers to the model's prediction accuracy facing specific adversarial attacks.", "Attack success rate (Suc % ) is the ratio of the number of texts successfully perturbed by an attack method to the total number of texts to be attempted.", "Number of Queries (#Query) is the average number of times the attacker queries the model, which means the more the average query number is, the harder the defense model is to be compromised.", "For a robust method, higher clean accuracy, accuracy under attack, and query times are expected, as well as lower attack success rate.", "We fine-tune the original BERT using the default settings on downstream tasks.", "We train 20 epochs to discover the robust tickets from the fine-tuned BERT, and then we retrain the robust tickets using default settings of BERT-base.", "The K -step PGD requires K forward-backward passes through the network, which is time consuming.", "Thus, we turn to FreeLB, which accumulates gradients in multiple forward passes and then passing gradients backward once.", "For our approach, we prune robust tickets in the range of 10% and 90% sparsity and report the best one in terms of robustness in our main experiments.", "For a fair comparison, the sparsity of LTH is the same as that of robust tickets.", "All experimental results are the average of 5 trials with different seeds.", "More implementation details and hyperparameters are provided in the Appendix B. 
We implement all models in MindSpore.", "Table 1 shows the results of robust tickets and other baselines under adversarial attack.", "We can observe that: 1) The original BERT and the BERT-tickets fail to perform well on the adversarial robustness evaluation, and the BERT-tickets even show lower robustness than BERT, indicating that it is difficult for the pruned subnetworks to fight against adversarial attacks when only test accuracy is considered.", "This result is consistent with the results in (Du et al., 2021; Xu et al., 2021).", "2) The proposed robust ticket achieves a significant improvement of robustness over the original BERT and other adversarial defense methods.", "Robust tickets use a better robust structure to resist adversarial attacks, which is different from the previous methods aimed at solving robust optimization problems.", "3) In", "both AGNEWS and IMDB, the randomly pruned subnetwork loses only about 1 performance point in test accuracy, but performs poorly in adversarial robustness.", "This suggests that robust tickets are more difficult to discover than traditional lottery tickets.", "4) Robust tickets sacrifice accuracy performance on SST-2 and IMDB.", "We speculate that this may be due to the trade-off between accuracy and robustness (Tsipras et al., 2019).", "Table 2 shows the results of the proposed method on more tasks.", "From Table 2, we can see that our proposed method yields significant improvements of robustness over the original BERT on the QNLI, MNLI and QQP datasets.", "There is a significant improvement even compared with InfoBERT and FreeLB.", "To better illustrate the contribution of each component of our method, we perform an ablation study by removing the following components: sparse mask learning (replaced with IMP) and the adversarial loss objective (Adv).", "The test results are shown in Table 3. 
We can observe that: 1)", "Mask learning is important for performance, and IMP does not identify robust subnetworks well (Vemparala et al., 2021).", "2) Without the adversarial loss objective, the proposed method identifies subnetworks that perform well in terms of clean accuracy, but it does not provide any improvement in terms of robustness.", "The proposed method can prune out a subnetwork of arbitrary sparsity based on the confidence of the masks.", "In Fig. 2, we compare the robust tickets and traditional lottery tickets across all sparsities.", "When the sparsity increases to a certain level, the robustness decreases faster than the accuracy, which indicates that the robustness is more likely to be affected by the model structure than the accuracy.", "Therefore, it is more difficult to find a robust ticket from BERT.", "The accuracy of the subnetwork decreases slowly with increasing sparsity, but the robustness shows a different trend.", "The change in robustness can be roughly divided into three phases: the robustness improves as the sparsity grows until a certain threshold; beyond this threshold, the robustness deteriorates but is still better than that of the lottery tickets.", "In the end, when being highly compressed, the robust network collapses into a lottery network.", "A similar phenomenon is also observed by Liang et al. (2021).", "The robustness performance curve is not as smooth as the accuracy curve; this may be due to the gap between the adversarial loss objective and the real 
textual attacks.", "Fig. 3 shows the sparsity patterns of robust tickets on all six datasets.", "We can clearly find that the pruning rate increases from bottom to top on the text classification tasks (IMDB, SST-2, AGNEWS), while it is more uniform on the natural language inference tasks (MNLI and QNLI) and Quora Question Pairs (QQP).", "Recent works show that BERT encodes a rich hierarchy of linguistic information.", "Taking advantage of probing tasks, Jawahar et al. (2019) indicate that surface information features are encoded at the bottom, syntactic information features in the middle of the network, and semantic information features at the top.", "Therefore, we speculate that the sparsity pattern of robust tickets is task-dependent.", "An important property of winning tickets is to accelerate the convergence of the training process (Chen et al., 2021; You et al., 2020).", "The training curve in Fig. 4 shows that the convergence speed of robust tickets is much faster compared with default fine-tuning and FreeLB.", "Moreover, the convergence rate of both accuracy and robustness", "is accelerating.", "The traditional lottery tickets converge faster than our method, which may be due to the fact that robust tickets require maintaining a trade-off between robustness and accuracy.", "To better understand which factor, initialization or structure, has a greater impact on the robust ticket, we conduct corresponding analysis studies.", "We avoid the effect of initializations by re-initializing the weights of robust tickets.", "To avoid the effect of structures and preserve the effect of initializations, we use the full BERT and re-initialize the weights that are not contained in the robust tickets.", "Aua% is obtained after using the TextFooler attack.", "The results are shown in Table 4. 
5.4.1 Importance of initialization LTH suggests that the winning tickets cannot be learned effectively without their original initialization.", "For our robust BERT tickets, their initializations are the pre-trained weights.", "Table 4 shows the failure of robust tickets when random re-initialization is performed.", "Frankle and Carbin (2019) hypothesize that the structure of winning tickets encodes an inductive bias customized for the learning task at hand.", "Although removing this inductive bias reduces performance compared to the robust tickets, it still outperforms the original BERT, and its", "performance improves further with longer training time (3 epochs → 10 epochs).", "It can be seen that the initializations of some pre-trained weights may lead to a decrease in the robustness of the model.", "In this paper, we articulate and demonstrate the Robust Lottery Ticket Hypothesis for PLMs: the full PLM contains subnetworks (robust tickets) that can achieve better robustness performance.", "We propose an effective method to solve the ticket selection problem by encouraging weights that are not responsible for robustness to become exactly zero.", "Experiments on various tasks corroborate the effectiveness of our method.", "We also find that pre-trained weights may be a key factor affecting the robustness on downstream tasks.", "The authors wish to thank the anonymous reviewers for their helpful comments.", "This work was partially funded by the National Natural Science Foundation of China (No. 
62076069, 61976056).", "This research was supported by Meituan, Beijing Academy of Artificial Intelligence(BAAI), and CAAI-Huawei MindSpore Open Fund." ]
[ "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "objective", "objective", "abstain", "result", "other", "other", "other" ]
[ "There exist biases in individual's language use; the same word ( e.g. , cool ) is used for expressing different meanings ( e.g. , temperature range ) or different words ( e.g. , cloudy , hazy ) are used for describing the same meaning.", "In this study, we propose a method of modeling such personal biases in word meanings (here-after, semantic variations ) with personalized word embeddings obtained by solving a task on subjective text while regarding words used by different individuals as different words.", "To prevent personalized word embeddings from being contaminated by other irrelevant biases, we solve a task of identifying a review-target (objective output) from a given review.", "To stabilize the training of this extreme multi-class classification, we perform a multi-task learning with metadata identification.", "Experimental results with reviews retrieved from RateBeer confirmed that the obtained personalized word embeddings improved the accuracy of sentiment analysis as well as the target task.", "Analysis of the obtained personalized word embeddings revealed trends in semantic variations related to frequent and adjective words.", "When we verbalize what we have sensed, there exist inevitable personal biases in word meanings (hereafter, ( personal ) semantic variations ).", "For example, when we say this pizza is greasy, how greasy can vary widely among individuals.", "When we see the same beer, we may use different words ( e.g. , red , amber ) to refer its color.", "The semantic variations will thereby cause problems not only in communicating with each other, but also in building natural language processing (NLP) systems.", "Several studies have attempted to personalize models to improve the performance on NLP tasks such as sentiment analysis (Gao et al., 2013) and dialogue systems (Li et al., 2016; Zhang et al., 2018).", "All of these studies, however, tried to estimate subjective output from subjective input ( e.g. 
, estimating sentiment scores given by reviewers).", "These personalized models are thereby affected by not only semantic variations in subjective input but also annotation bias (deviation of outputs given by the annotators) and selection bias (devi-ation of outputs caused by the deviation of input) ( 2).", "This makes it difficult to understand the pure impact of the personal semantic variations.", "In this study, aiming at understanding semantic variations and their impact on NLP tasks, we propose a method for modeling personal semantic variations with personalized word embeddings obtained through the review-target identification task.", "This task estimates the review-target ( objective output) from a given review ( subjective input) ( 3), and is free from annotation bias since output labels are given a priori.", "Also, selection bias can be suppressed by using a dataset in which the same reviewer evaluates the same target only once, so as not to learn the deviation of output labels caused by the choice of inputs.", "To stabilize the training of this extreme multi-class classification, we apply multi-task learning (MTL) with metadata estimation of the review-target to effectively learn a reliable model (personalized word embeddings).", "We validate our hypothesis that words related to the five senses have large semantic variations.", "We first confirm the impact of personalized word embeddings in the review-target identification using a review dataset obtained from RateBeer, and next evaluate their usefulness in sentiment analysis ( 4.2).", "Analysis of the obtained personalized word embeddings on three metrics (frequency, dissemination and polysemy) reveals trends on which words have large semantic variations ( 4.3).", "We established a method to obtain personal semantic variations via multi-task learning on a task with objective outputs ( 3).", "We categorized personal biases in NLP ( 2).", "We confirmed the usefulness of personalized word embeddings in review-target identification and sentiment analysis tasks ( 4.2).", "We revealed trends in personal semantic variations ( 4.3).", "As discussed in 1, biases considered by personalization in NLP tasks have three facets: (1) semantic variation in task inputs (biases in how people use words; our target) (2) annotation bias of output labels (biases in how annotators label) and (3) selection bias of output labels (biases in how people choose perspectives ( e.g., review-targets) that directly affects outputs ( e.g., polarity labels)).", "Existing studies have modeled (2) and (3) with or without (1) for NLP tasks such as sentiment analysis (Li et al., 2011; Gao et al., 2013; Tang et al., 2015a,b; Chen et al., 2016), machine translation (Mirkin and Meunier, 2015; Michel and Neubig, 2018; Wuebker et al., 2018), and dialogue systems (Li et al., 2016; Zhang et al., 2018).", "However, it is difficult to untangle the different facets of personal biases, there is no study aiming to analyze solely personal semantic variations.", "Meanwhile, word embeddings induced for a simple NLP task such as sentiment classification conveys less information, which are not suitable for analyzing semantic variations.", "Computational linguists have utilized word embeddings to capture semantic variations of words caused by diachronic (Hamilton et al., 2016; Szy-manski, 2017; Rosenfeld and Erk, 2018; Jaidka et al., 2018), geographic (Bamman et al., 2014; Garimella et al., 2016) or domain (Tredici and Fernandez, 2017) differences.", "In these studies, they have mainly 
discussed relationships between semantic variations of words and their frequency, dissemination (the number of users), or polysemy of the words.", "Hamilton et al. (2016) report that the meanings of more frequent words are more stable over time, and the meanings of polysemous words are likely to change over time since polysemous words appear in diverse contexts (Winter et al., 2014; Breal, 1897).", "Tredici and Fernandez (2017) report that the meanings of words used by more people are more stable.", "In this study, we analyze the personal semantic variations by inducing personalized word embeddings, mainly focusing on how frequent, disseminated or polysemous words are biased, following these studies.", "This section describes our neural network-based model (Figure 1) designed for inducing personalized word embeddings via review-target identification.", "This model estimates the review-target from a given review.", "Model Overview: The whole process is as follows.", "First, a given review, represented as a sequence of words, is transformed to a sequence of their word embeddings via an embedding layer.", "Here, our model regards words written by different reviewers as different words for personalization.", "Next, we apply bi-directional long-short term memory (Bi-LSTM) (Gers et al., 1999) to the sequence of word embeddings and use the concatenation of outputs from the forward and backward LSTMs as a review representation.", "Finally, a feed-forward layer computes an output probability distribution from the encoded representation of the review.", "Multi-task Learning (MTL): The extremely large number of labels (review-targets) makes it difficult to stably train the target identification model.", "To mitigate this, we jointly train auxiliary tasks that estimate the metadata of the review-target along with the target task.", "This approach assumes that understanding metadata contributes the performance of the target identification.", "Concretely, our MTL model contains a task-shared embedding layer, a task-shared LSTM-encoder, and task-private feed-forward layers similarly to (Dong et al., 2015; Luong et al., 2016).", "In our experiments, these task-private layers consist of three layers for classification and one layer for regression (Figure 1).", "In the classification tasks, the model computes log probability over target labels as the output and cross-entropy is used as the loss function.", "Here, multi-task learning raises a new problem.", "In auxiliary tasks, since the same reviewer can select the same label multiple times, the personalized word embeddings trained through the multitask learning may implicitly include the selection bias of the output labels depending on the reviewers.", "Therefore, to exclude those irrelevant biases from the personalized embeddings, we introduce personalized bias terms to feed-forward layers of each task.", "These bias terms are fixed to the prior distributions of outputs in the training set depending on reviewers so that they absorb selection biases instead of personalized word embeddings.", "We first evaluate the effect of personalization in the target identification task.", "Next, to confirm the usefulness of the obtained personalized embeddings, we exploit them to solve a sentiment analysis task for extrinsic evaluation.", "Finally, we analyze the degree and tendencies of semantic variations captured by the obtained personalized word embeddings.", "Data For training and intrinsic evaluation, we use a huge review dataset about beers constructed from 
RateBeer ( https://www.ratebeer.com/ ; McAuley and Leskovec, 2013).", "It contains 2,924,163 reviews about 110,369 types of beers with various metadata ( e.g., brewery name, style, rating, etc.) written by 29,265 reviewers.", "From this dataset, we extracted 527,157 reviews about 83,049 types of beers written by the top-100 reviewers who wrote the most reviews, to guarantee enough data size per reviewer.", "After that, we randomly divided these reviews into training (421,725), development (52,716), and test (52,716) sets in the ratio of 8:1:1.", "We refer to this dataset as the RateBeer dataset .", "Tasks Our target task takes a beer review and estimates the target beer reviewed in it.", "Regarding the metadata estimated in multi-task learning (MTL), we chose style with 89 types and brewery with 6,208 types for classification tasks and alcohol by volume (ABV) for a regression task.", "Note that these metadata are objective and our MTL is free from annotation bias.", "Table 1: Hyperparameters of our model: # layers of Bi-LSTM: 1; dimensions of LSTM output: 200; dimensions of word embeddings: 200; dropout rate: 0.5; mini-batch size: 400; initial learning rate: 0.005; vocabulary size (w/o personalization): 23,556; vocabulary size (w/ personalization): 469,346.", "In the sentiment analysis task, we estimate the ratings of given reviews annotated by the reviewers.", "The ratings are integers and range from 1 to 20.", "Here, we solve this task as a regression task since it is natural to treat the fine-grained rating as continuous values.", "Models and Hyperparameters In the review-target and its metadata identification tasks, we compare our model described in Section 3 with five models with different settings.", "Their differences are: (1) whether the model is trained through MTL, (2) whether personalization is applied to the embeddings, and (3) whether personalization is applied to the bias term in the output layers.", "When MTL is not employed, multiple models are independently trained per task without sharing layers.", "Table 1 shows the major hyperparameters.", "We initialize the embedding layer with pretrained skip-gram embeddings (Mikolov et al., 2013) induced from the training set of the RateBeer dataset.", "The vocabulary is defined by all the words that appeared at least 10 times in the training set and that the top-100 reviewers have used at least once.", "For optimization, we train the models for up to 100 epochs with Adam (Kingma and Ba, 2015) and select the one at the epoch with the best results on the development set.", "In the sentiment analysis task for extrinsic evaluation of the obtained personalized word embeddings, we train another set of models with the same architecture and hyperparameters as the review-target identification models in Figure 1, except that they have only one feed-forward layer for the target regression task.", "The embedding layers of the models are kept fixed after being initialized by the word embeddings extracted from the corresponding review-target identification models with the same settings of personalization and MTL.", "All of our models were implemented in PyTorch ( https://pytorch.org/ ), version 0.4.0.", "Regarding MTL, we select the model at the epoch with the best results in the target task.", "Table 2 shows the accuracies on the three classification tasks (product, style, and brewery) and RMSE on the regression task (ABV) over the test sets.", "We can see two insights from the results: (1) In the target task, the model that adopted all the methods outperformed the others, 
(2) In the auxiliary tasks, MTL and personalization had no effect.", "As for the first one, since the identification of the review-target requires both detailed understandings of all the related metadata and capturing biases of word meanings, our proposed method considering both elements achieved the best performance as a natural consequence.", "The second one is not surprising since the metadata estimated in the auxiliary tasks are weakly related to each other.", "Thus multi-task learning and personalization did not contribute to the improvement of these auxiliary tasks.", "Finally, Table 3 shows the results of the sentiment analysis task for extrinsic evaluation.", "Similarly to the review-target identification, the model with both MTL and personalization performed the best.", "The personalization of output bias term also slightly improved RMSE.", "These results confirm that the personalized word embeddings trained through our methods successfully learned task-independent personal semantic variations.", "In other words, they were helpful even for solving tasks other than the review-target identification.", "In this section, we analyze the personalized word embeddings extracted from the best model with MTL and personalization to confirm what kind of personal biases exist in each word.", "Here, we target on only the words used by more than or equal to 30% of the reviewers excluding stopwords to remove the influences of low frequent words.", "We first define the personal semantic variations of word w , to determine how the representations of the word are different by individuals, as:", "where U ( w i ) is the set of the reviewers who used the word w i in the training set, e u j w i is the word embedding of w i personalized to reviewer u j , and e w i is the average of e u j w i for u j U ( w i ) .", "Here, we focus on the three factors, frequency , dissemination , and polysemy which have been studied on semantic variations caused by diachronic, geographical or domain differences of text (see 2).", "Figure 2 shows the semantic variations of words against the degree of the three metrics.", "The x-axes correspond to", "(a) log frequency of the word,", "(b) the ratio of the reviewers who used the word, and", "(c) the number of synsets found in WordNet (Miller, 1995) ver. 
3.0, respectively.", "Interestingly, in contrast to the results reported in previous studies (Hamilton et al., 2016; Tredici and Fernandez, 2017), personal semantic variations correlate highly with frequency and dissemination, and poorly with polysemy in our results.", "This tendency can be explained as follows: In the dataset used in our experiments, words related to the five senses such as mild, dry and soapy frequently appear and their usages depend on the feelings and experiences of each individual.", "Therefore, these words show high semantic variations.", "Regarding polysemy, although the semantic variations acquired through our method might change the degree or nuance of the word sense, they do not change its synset.", "This is because those words are still used only in highly skewed contexts related to beer, where word senses and their meanings do not significantly fluctuate.", "Top-50 (largest variation): surprisingly, nice, quite, light, pleasant, actually, though, buttery, grassy, really, bready, dusty, fruity, decent, mild, rather, little, toffee, earthy, woody, subtle, nutty, strange, even, still, dry, tasty, maybe, medium, bit, soapy, interesting, somewhat, malt, pretty, brewery, character, solid, lovely, floral, herbal, grainy, big, yet, nose, fruit, fairly, aroma, good, almost, metallic", "Bottom-50 (smallest variation): lasted, primary, system, secondary, personal, test, acquired, ii, greater, standout, roof, england, flow, scored, purchase, partly, colorado, spare, rocks, ounce, se, jug, source, shipping, fullness, denmark, center, diminished, greatly, met, spirits, burns, comments, surrounded, scores, expectations, carmel, crew, die, annual, laces, reading, consumed, handpump, disappeared, suits, husks, duck, rise, meal, hall", "Table 4 shows the top-50 words with the largest (and smallest) semantic variations.", "As can be seen from the table, the top-50 words contain many more adjectives (58%) compared with the bottom-50 ones (16%), which are likely to be used to represent our feelings depending on the five senses.", "To see more precisely what kind of words have large semantic variations, we manually classify the adjectives of the top-50 (and bottom-50) by the five senses.", "From the results, on the RateBeer dataset, there were more words representing each sense except hearing in the top-50 words compared with the bottom-50 ones.", "Finally, we analyze the relationships between words beyond the analysis focusing on the single word.", "We visualized the obtained personalized word embeddings of the word mild and some closest words in the embedding space as an example in Figure 3. From the results, the intersection of the clusters ( e.g. 
, grainy and grassy ) means that the same meaning can be represented in different ways by individuals.", "In this study, we proposed a method of modeling personal semantic variations with personalized word embeddings induced through the review-target identification task.", "The experimental results on the large-scale beer review dataset showed that personalized word embeddings obtained by multi-task learning with metadata identification improved the accuracy of sentiment analysis as well as the target task.", "Our analysis revealed that words related to the five senses and adjectives have large semantic variations.", "We plan to analyze relationships between semantic variations and user factors of writers who used the target words such as age and gender.", "We will develop a generic method of inducing personalized word embeddings for any subjective text.", "This work was partially supported by Commissioned Research (201) of the National Institute of Information and Communications Technology of Japan." ]
[ "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "objective", "abstain", "objective", "objective", "method", "result", "method", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "result", "result", "abstain", "abstain", "result", "objective", "abstain", "result", "method", "objective", "other" ]
[ "A central quest of probing is to uncover how pre-trained models encode a linguistic property within their representations.", "An encoding, however, might be spuriousi.e., the model might not rely on it when making predictions.", "In this paper, we try to find an encoding that the model actually uses, introducing a usage-based probing setup.", "We first choose a behavioral task which cannot be solved without using the linguistic property.", "Then, we attempt to remove the property by intervening on the model's representations.", "We contend that, if an encoding is used by the model, its removal should harm the performance on the chosen behavioral task.", "As a case study, we focus on how BERT encodes grammatical number, and on how it uses this encoding to solve the number agreement task.", "Experimentally, we find that BERT relies on a linear encoding of grammatical number to produce the correct behavioral output.", "We also find that BERT uses a separate encoding of grammatical number for nouns and verbs.", "Finally, we identify in which layers information about grammatical number is transferred from a noun to its head verb.", "Pre-trained language models have enabled researchers to build models that achieve impressive performance on a wide array of natural language processing (NLP) tasks (Devlin et al., 2019; Liu et al., 2019; Raffel et al., 2020).", "How these models encode and use the linguistic information necessary to perform these tasks, however, remains a mystery.", "Over recent years, a number of works have tried to demystify the inner workings of various pre-trained language models (Alain and Bengio, 2016; Adi et al., 2017; Elazar et al., 2021), but no comprehensive understanding of how the models work has emerged.", "Such analysis methods are typically termed probing , and are methodologically diverse.", "In our assessment, most research in probing can be taxonomized into three distinct paradigms.", "In the first paradigm, diagnostic probing , researchers typically train a supervised classifier to predict a linguistic property from the models' representations.", "High accuracy is then interpreted as an indication that the representations encode information about the property (Alain and Bengio, 2016; Adi et al., 2017; Hupkes et al., 2018; Conneau et al., 2018).", "A second family of methods, behavioral probing , consists in observing a model's behavior directly, typically studying the model's predictions on hand-picked evaluation datasets (Linzen et al., 2016; Goldberg, 2019; Warstadt et al., 2020; Ettinger, 2020).", "Finally, causal probing methods rely on interventions to evaluate how specific components impact a model's predictions (Giulianelli et al., 2018; Vig et al., 2020b; Elazar et al., 2021).", "In this paper, we will investigate how linguistic properties are encoded in a model's representations, where we use the term encoding to mean the subspace on which a model relies to extractor decodethe information.", "While probing has been extensively used to investigate whether a linguistic property is encoded in a set of representations, it still cannot definitively answer whether a model actually uses a certain encoding.", "Diagnostic probes, for instance, may pick up on a spurious encoding of a linguistic property, i.e., an encoding that allows us to extract our target property from the representation, but which the model being probed may not actually use to make a prediction.", "Combining the three paradigms above, we instead seek to find encodings that are actually used 
by a pre-trained model, which we term functional encodings .", "To that end, we take a usage-based perspective on probing.", "Under this perspective, a researcher first identifies a linguistic property to investigate (e.g., grammatical number), and selects a behavioral task which requires knowledge of this 8818 property (e.g., selecting a verb's inflection which agrees in number with its subject).", "The researcher then performs a causal intervention with the goal of removing a specific encoding (of the linguistic property under consideration) from the model's representations.", "If the encoding is a functional encoding, i.e., an encoding that the model indeed uses to make a prediction, then the intervention should prevent the model from solving the task.", "1 Finally, once a functional encoding is discovered, we can use it to track how the property's information flows through the model under investigation.", "As a case study, we examine how BERT (Devlin et al., 2019) uses grammatical number to solve a number agreement task.", "In English, grammatical number is a binary morpho-syntactic property: A word is plural or singular.", "In turn, subjectverb number agreement is a behavioral task; it inspects whether a model can predict the correct verbal inflection given its subject's number.", "For a model to solve the task, it thus requires information about the grammatical number of the subject and the verb.", "Our goal is to find how the model encodes this information when using it to make predictions.", "In other words, we want to find the structure from which the model decodes number information when solving the task.", "In our experiments, we make three findings.", "First, our experiments provide us with strong evidence that BERT relies on a linear functional encoding of grammatical number to solve the number agreement task.", "Second, we find that nouns and verbs do not have a shared functional encoding of number; in fact, BERT relies on disjoint sub-spaces to extract their information.", "Third, our usage-based perspective allows us to identify where number information (again, as used by our model to make predictions) is transferred from a noun to its head verb.", "Specifically, we find that this transfer occurs between BERT's 3 rd and 8 th layers, and that most of this information is passed indirectly through other tokens in the sentence.", "A variety of approaches to probing have been proposed in the literature.", "In this paper, we taxonomize them into three paradigms:", "(i) diagnostic probing,", "(ii) behavioral probing, and", "(iii) causal probing.", "Diagnostic Probing.", "Traditionally, probing papers focus on training supervised models on top of fixed pre-trained representations (Adi et al., 2017; Hall Maudslay et al., 2020).", "The general assump-tion behind the work is that, if a probe achieves high accuracy, then the property of interest is encoded in the representations.", "Many researchers have expressed a preference for linear classifiers in probing (Alain and Bengio, 2016; Ettinger et al., 2016; Hewitt and Manning, 2019), suggesting that a less complex classifier gives us more insight into the model.", "Others, however, called this criterion into question (Tenney et al., 2019b,a; Voita and Titov, 2020; Papadimitriou et al., 2021; Sinha et al., 2021; Pimentel et al., 2020a; Pimentel and Cotterell, 2021).", "Notably, Hewitt and Liang (2019) proposed that complex classifiers may learn to extract a property by themselves, and may thus not reflect any true pattern in the 
representations.", "Further, Pimentel et al. (2020b) showed that, under a weak assumption, contextual representations encode as much information as the original sentences.", "Ergo, it is not clear what we can conclude from diagnostic probing alone.", "Behavioral Probing.", "Another probing paradigm analyzes the behavior of pre-trained models on carefully curated datasets.", "By avoiding the use of diagnostic probes, they do not fall prey to the criticism abovetasks are directly performed by the model, and thus must reflect the pre-trained models' acuity.", "One notable example is Linzen et al. (2016), who evaluate a language model's syntactic ability via a careful analysis of a number agreement task.", "By controlling the evaluation, Linzen et al. could disentangle the model's syntactic knowledge from a heuristic based on linear ordering.", "In a similar vein, a host of recent work makes use of carefully designed test sets to perform behavioral analysis (Ribeiro et al., 2020; Warstadt et al., 2020; Warstadt and Bowman, 2020; Lovering et al., 2021; Newman et al., 2021).", "While behavioral probing often yields useful insights, the paradigm typically treats the model itself as a blackbox, thus failing to explain how individual components of the model work.", "Causal Probing.", "Finally, a third probing paradigm relies on causal interventions (Vig et al., 2020b; Tucker et al., 2021; Ravfogel et al., 2021).", "In short, the researcher performs causal interventions that modify parts of the network during a forward pass (e.g., a layer's hidden representations) 8819 to determine their function.", "For example, Vig et al. (2020a) fix a neuron's value while manipulating the model's input to evaluate this neuron's role in mediating gender bias.", "Relatedly, Elazar et al. 
(2021) propose a method to erase a target property from a model's intermediate layers.", "They then analyze the effect of such interventions on a masked language model's outputs.", "Under our usage-based perspective, our goal is to find a functional encodingi.e., an encoding that the model actually uses when making predictions.", "We achieve this by relying on a combination of the paradigms discussed in 2.", "To this end, we first need a behavioral task that requires the model to use information about the target property.", "We then perform a causal intervention to try to remove this property's encoding.", "We explain both these components in more detail now.", "Behavioral Task.", "We first require a behavioral task which can only be solved with information about the target property.", "The choice of task and target property are thus co-dependent.", "Further, we require our model to perform well on this task.", "On one hand, if the model cannot achieve high performance on the behavioral task, we cannot be sure the model encodes the target property, e.g., grammatical number, at all.", "On the other hand, if the model can perform the task, it must make use of the property.", "Causal Intervention.", "Our goal in this work is to answer a causal question: Can we identify a property's functional encoding?", "We thus require a way to intervene in the model's representations.", "If a model relies on an encoding to make predictions, removing it should harm the model's performance on the behavioral task.", "If follows that, by measuring the impact of our interventions on the model's behavioral output, we can assess whether our model was indeed decoding information from our targeted encoding.", "The empirical portion of this paper focuses on a study of how BERT encodes grammatical number in English.", "We choose number as our object of study because it is a well understood morpho-syntactic property in English.", "Thus, we are able to formulate simple hypotheses about how BERT passes information about number when performing number agreement.", "We use Linzen et", "al.'s (2016) number agreement task as our behavioral task.", "In English, a verb and its subject agree in grammatical number (Corbett, 2006).", "Consider, for instance, the sentences: (1)", "In the above sentences, both (1-a) and (1-c) are grammatical, but (1-b) and (1-d) are not; this is because, in the latter two sentences, the highlighted verb does not agree in number with its subject.", "The subjectverb number agreement task evaluates a model's ability to predict the correct verbal inflection, measuring its preference for the grammatical sentence.", "In this task, the probed model is typically asked to predict the verb's number given its context.", "The model is then considered successful if it assigns a larger probability to the correct verb inflection: context : The boy that holds the keys [MASK] to the movies.", "success : p ( goes | context ) > p ( go | context ) failure : p ( go | context ) > p ( goes | context ) In this setting, the subject is usually called the cue of the agreement, and the verb is called the target .", "Examples similar to the above are often designed to study the impact of distractors (the word keys in (1-c) and (1-d)) on the model's ability to predict the correct verb form.", "Success on the task is usually taken as evidence that a model is able to track syntactic dependencies.", "In this regard, this phenomena has been studied in a variety of settings to investigate the syntactic abilities of neural language 
models (Gulordava et al., 2018; Marvin and Linzen, 2018; Newman et al., 2021; Lasri et al., 2022).", "In this work, however, we do not use this task to make claims about the syntactic abilities of the model, as done by Linzen et al. (2016).", "Instead, we employ it as a case study to investigate how BERT encodes and uses grammatical number.", "A number of studies have investigated how grammatical number is encoded in neural language models. 2", "Most of this work, however, focuses on diagnostic probes (Klafka and Ettinger, 2020; Torroba Hennigen et al., 2020).", "These studies are thus agnostic about whether the probed models actually use the encodings of number they discover.", "Some authors, however, do consider the relationship between how the model encodes grammatical number and its predictions.", "Notably, Giulianelli et al. (2018) use a diagnostic probe to investigate how an LSTM encodes number in a subject-verb number agreement setting.", "Other approaches (Lakretz et al., 2019; Finlayson et al., 2021) have been proposed to apply interventions at the neuron level and track their effect on number agreement.", "In this work, we look for functional encodings of grammatical number, i.e., encodings which are in fact used by our probed model when solving the task.", "We discuss how to identify and remove an encoding from a set of contextual representations using diagnostic probing.", "Our use of diagnostic probing is thus twofold.", "For a model to rely on an encoding of our property when making predictions, the property must be encoded in its representations.", "We thus first use diagnostic probing to measure the amount of information a representation contains about the target linguistic property.", "In this sense, diagnostic probing serves to sanity-check our experiments: if we cannot extract information from the representations, there is no point in going forward with our analysis.", "Second, we make use of diagnostic probing in the context of amnesic probing (Elazar et al., 2021), which allows us to determine whether this probe finds a functional or a spurious encoding of the target property.", "In this section, we discuss how to estimate the amount of extractable number information in our probed model's representations.", "This is the probing perspective taken by Pimentel et al. (2020b) and Hewitt et al. 
(2021) in their diagnostic probing analyses.", "The crux of our analysis relies on the fact that the encoding extracted by diagnostic probes is not necessarily the functional encoding used by our probed model.", "Nevertheless, for a model to use a property in its predictions, this property should 2 We focus on grammatical number here.", "There is, however, also a vast literature investigating how BERT encodes number from a numeracy point of view (Wallace et al., 2019; Geva et al., 2020; Spithourakis and Riedel, 2018).", "at least be extractable, which is true due to the data processing inequality.", "In other words, extractability is a necessary, but not sufficient, condition for a property to be used by the model.", "We quantify the amount of extractable information in a set of representations in terms of a diagnostic probe's V -information (Xu et al., 2020), where the V -information is a direct measure of the amount of extractable information in a random variable.", "We compute the V -information as: 3 $I_{\mathcal{V}}(R \to N) = H_{\mathcal{V}}(N) - H_{\mathcal{V}}(N \mid R)$ (1) where R and N are, respectively, a representation-valued and a number-valued random variable, $\mathcal{V}$ is a variational family determined by our diagnostic probe, and the V -entropies are defined as: $H_{\mathcal{V}}(N) = \inf_{q \in \mathcal{V}} \mathbb{E}_{n \sim p(n)} \log \frac{1}{q(n)}$ (2) $H_{\mathcal{V}}(N \mid R) = \inf_{q \in \mathcal{V}} \mathbb{E}_{n, r \sim p(n, r)} \log \frac{1}{q(n \mid r)}$ (3) Further, if we denote our analyzed model's (i.e., BERT's) hidden representations as: $\mathbf{r}_{t,l} = \mathrm{model}(\textit{sentence})_{t,l}$ (4) we define our linear diagnostic probe as: $p(n_t = \textsc{sing} \mid \textit{sentence}) = \sigma(\boldsymbol{\theta} \cdot \mathbf{r}_{t,l} + b)$ (5) where $\mathbf{r}_{t,l} \in \mathbb{R}^{768}$, t is a sentence position and l is a layer, $n_t$ is the binary number label associated with the word at position t, $\sigma$ is the sigmoid function, $\boldsymbol{\theta}$ is a real-valued column parameter vector and b is a bias term.", "In this case, we can define our variational family as $\mathcal{V} = \{ p_{\boldsymbol{\theta}} \mid \boldsymbol{\theta} \in \mathbb{R}^{768} \}$.", "We now discuss how we perform a causal intervention to prevent the analyzed model from using a given encoding.", "The goal is to damage the model and make it forget a property's information.", "This allows us to analyze whether that encoding actually influences the probed model's predictions, i.e., whether this encoding is indeed functional.", "To this end, we employ amnesic probing (Elazar et al., 2021).", "4 In short, we first learn a linear diagnostic classifier, following eq.", "(5).", "We then compute 3 See App.", "B for a detailed description of V -information.", "4 In particular, this intervention consists in applying iterative null-space projection to the representations, originally proposed by Ravfogel et al. (2020).", "We note that Ravfogel et al. 
(2022a,b) recently proposed two new methods to remove information from a set of representations.", "By iterating this process, we store a set of parameter vectors ( k ) and their associated projectors P ( k ) null until we are unable to extract the property.", "The composition of these projectors makes it possible to remove all linearly extractable number information from the analyzed representations.", "We can then apply the resulting composition to the said representations to get a new set of vectors: r t,l = P ( k ) null P (2)null P (1)null r t,l (7) After learning the projectors, we can measure how erasing a layer's encoding impacts:", "(i) the subsequent layers, and", "(ii) our model's performance on the number agreement task.", "Removing a functional encoding of grammatical number should cause a performance drop on the number agreement task.", "Further, looking at both", "(i) and", "(ii) allows us to make a connection between the amount of information we can extract from our probed model's layers and its behavior.", "We are thus able to determine whether the encodings revealed by our diagnostic probes are valid from a usage-based perspectiveare they actually used by the probed model on a task that requires them?", "5 6 Experimental Setup Data.", "We perform our analysis on Linzen et", "al.'s (2016) number agreement dataset, which consists in sentences extracted from Wikipedia.", "In this dataset, each sentence has been labeled with the position of the cue and target, along with their grammatical number.", "We assume here that this dataset is representative of the number agreement task; this may not be true in general, however.", "Model.", "In our experiments, we probe BERT (De-vlin et al., 2019).", "6 Specifically, BERT is a bidirectional transformer model with 12 layers, trained using a masked language modeling objective.", "As BERT has been shown to perform well on this dataset (Goldberg, 2019), we already know that our probed model passes our first requirement; BERT does use number information in its predictions.", "5 Our method differs from amnesic probing mostly in that all our analyses are based on a behavioral task which we know a priori to require the property we investigate.", "6 We focus on bert-base-uncased , as implemented in the transformers library (Wolf et al., 2020).", "Distinguishing Nouns and Verbs.", "While number is a morpho-syntactic property common to nouns and verbs, we do not know a priori if BERT relies on a single subspace to encode number in their representations.", "Though it is possible for BERT to use the same encoding, it is equally plausible that each part of speech would get its own number encoding.", "This leads us to perform our analyses using independent sets of representations for nouns and verbs; as well as a mixed set which merges both of them.", "Further, verbs are masked when performing the number agreement task, so their representations differ from those of unmasked verbs.", "Ergo, we analyze both unmasked, and masked tokens at the target verb's positionwhich for simplicity we call verbs and masked verbs, respectively.", "This leaves us with four probed categories: nouns, verbs, masked verbs, and mixed.", "In our experiments, we focus on answering two questions:", "(i) How is number information encoded in BERT's representations?", "and", "(ii) How is number information transferred from a noun to its head verb for the model to use it on the behavioral task?", "We answer question", "(i) under both extractability and usage-based perspectives.", 
"In 7.1, we present our sanity-check experiments that demonstrate that grammatical number is indeed linearly extractable from BERT's representations.", "In 7.2 and 7.3, we use our causal interventions: we identify BERT's functional encodings of number; and analyze whether these functional encodings are shared across parts of speech.", "Finally, in 7.4 and 7.5 we investigate question", "(ii), taking a closer look at the layers in which information is passed.", "Fig. 1 presents diagnostic probing results in all four of our analyzed settings.", "7 A priori , we expect that verbs' and nouns' representations should already contain a large amount of V -information about their grammatical number at the type-level.", "As expected, we see that the V -information is near its maximum for both verbs and nouns in all layers; this means that nearly 100% of the uncertainty about grammatical number is eliminated given BERT's representations.", "Further, the mixed category results also reach a maximal V -information, which indicates that it is possible to extract information linearly about both categories at the same time.", "On the other hand, the V -information of masked verbs is 0 at the non-contextual layer and it progressively grows as we get to the upper layers.", "8 As we go to BERT's deeper layers, the V -information steadily rises, with nearly all of the original uncertainty eliminated in the mid layers.", "This suggests that masked verbs' representations acquire number information in the first 7 layers.", "However, from these results alone we cannot confirm whether the encoding that nouns and verbs use for number is shared or disjoint.", "We thus inspect the encoding found by our diagnostic probes, evaluating the cosine similarity between their learned parameters (ignoring the probes' bias terms b here).", "If there is a single shared encoding across categories, these cosine similarities should be high.", "If not, they should be roughly zero.", "Fig. 2 (left) shows that nouns and verbs might encode number along different directions.", "Specifically, noun 7 We further present accuracy results in App.", "A. 8 We note that, in Fig. 1, layer 0 corresponds to the noncontextual representations (i.e. the word embeddings before being summed to BERT's position embeddings).", "Noncontextual layers thus contain no information about the number of a masked verb, as the mask token contains no information about its replaced verb's number.", "representations on the first 6 layers seem to have a rather opposite encoding from verbs, while the later layers are mostly orthogonal.", "Further, while masked verbs and verbs do not seem to share an encoding in the first few layers, they are strongly aligned from layer 6 on (Fig. 2; center).", "We now know that there are encodings from which we can extract number from nouns and verbs, and that these encodings are disjoint.", "However, we still do not know whether the encoding is spurious or functional.", "The patterns previously observed suggest there is a linear encoding, from which grammatical number can be extracted from BERT's representations.", "We, however, cannot determine whether these encodings are actually those used by the model to make predictions.", "We now answer this question taking our proposed usage-based perspective, studying the impact of linearly removing number information at both the cue and target positions.", "9 We evaluate the model's change in behavior, as evaluated by its performance on the number agreement (NA) task.", "Fig. 3a and Fig. 
3c show the decrease in how much information is extractable at the target position after the interventions are applied.", "Fig. 3b and Fig. 3d show BERT's accuracy drops on the NA task (as measured at the output level).", "By comparing these results, we find a strong alignment between the information lost across layers and the damage caused to the performance on the task irreversible information losses resulting from our intervention are mirrored by a performance decrease on the NA task.", "This alignment confirms that the model indeed uses the linear information erased by our probes.", "In other words, we have found the probed property's functional encoding.", "We now return to the question of whether nouns and verbs share a functional encoding of number, or whether BERT encodes number differently for them.", "To answer this question, we investigate the impact of removing a category's encoding from another category, e.g. applying an amnesic projector learned on verbs to a noun.", "In particular, we measure how these interventions decrease BERT's performance in our behavioral task.", "Figs.", "3b and 3d presents these results.", "We observe that each category's projector has a different impact on performance depending on whether it is applied to the cue or the target.", "Fig. 3b, for instance, shows that using the verb's, or masked verb's, projector to erase information at the cue's (i.e., the noun's) position does not hurt the model.", "It is similarly unimpactful (as shown in Fig. 3d) to use the noun's projectors to erase a target's (i.e., the masked verb's) number information.", "Further, the projector learned on the mixed set of representations does affect the cue, but has little effect on the target.", "Together, these results confirm that BERT relies on rather distinct encodings of number information for nouns and verbs.", "10 10 A potential criticism of amnesic probing is that it may remove more information than necessary.", "Cross-testing our amnesic probes, however, results in little effect on BERT's behavior.", "It is thus likely that they are not overly harming our model.", "Further, we also run a control experiment proposed by Elazar et al., removing random directions at each layer (instead of the ones found by our amnesic probes).", "These results are displayed in the appendix in Tab.", "1.", "These experiments allow us to make stronger claims about BERT's encoding of number information.", "First, the fact that our interventions have a direct impact on BERT's behavioral output confirms that the encoding we erase actually bears number information as used by the model when making predictions .", "Second, the observation from Fig. 
2that number information could be encoded orthogonally for nouns and verbsis confirmed from a usage-based perspective.", "Indeed, using amnesic probes trained on nouns has no impact when applied to masked verbs, and amnesic probes trained on verbs have no impact when applied to nouns.", "These fine-grained differences in encoding may affect larger-scale probing studies if one's goal is to understand the inner functioning of a model.", "Together, these results invite us to employ diagnostic probes more carefully, as the encoding found may not be actually used by the model.", "Once we have found which encoding the model uses, we can pinpoint at which layers the information is passed from the cue to the target.", "To that end, we observe how interventions applied in each layer affect performance.", "We know number information must be passed from the cue to the target's representationsotherwise the model cannot solve the task.", "Therefore, applying causal interventions to remove number information should harm the model's behavioral performance when applied to:", "(i) the cue's representations before the trans-8824 fer occurs;", "Interestingly, we observe that target interventions are only harmful after the 9 th layer; while noun interventions only hurt up to the 8 th layer (again, shown in Fig. 3).", "This suggests that the cue passes its number information in the first 8 layers, and that the target stops acquiring number information in the last three layers.", "While we see a clear stop in the transfer of information after layer 8, Fig. 3a shows that the previous layers' contribution decreases slowly up to that layer.", "We thus conclude that information is passed in the layers before layer 8; however, we concede that our analysis alone makes it difficult to pinpoint exactly which layers.", "Finally, in our last experiments, we complement our analysis by performing attention removal to investigate how and where information is transmitted from the cue to the target position.", "This causal intervention first serves the purpose of identifying the layers where information is transmitted.", "Further, we wish to understand whether information is passed directly, or through intermediary tokens.", "To this end, we look at the effect on NA performance after:", "(i) cutting direct attention from the target to the cue at specific layers,", "(ii) cutting attention from all tokens to the cue (as information could be first passed to intermediate tokens, which the target could attend to in subsequent layers).", "11 Specifically, we perform these interventions in ranges of layers (from layer i up to j ).", "We report number agreement accuracy drops in Fig. 
4.", "12 The diagonals from this figure show that removing attention from a single layer has basically no effect.", "Further, cutting attention from layers 6 to 10 suffices to observe near-maximal effect for direct attention.", "Interestingly, it is at those layers where we see a transition from it being more harmful to apply amnesic projectors to the cue or to the target (in 7.4).", "However, while those layers play a role in carrying number information to the target position, the drop is relatively modest when cutting only direct attention ( 10% ).", "Cutting attention from all tokens to the cue, in turn, has a significant effect on performance (up to 40% ), and is maxi-11 Klafka and Ettinger (2020), for instance, showed that number information of a given token was distributed to neighboring tokens in the upper layers 12 We detail these interventions in App.", "mal for layers 2 to 8.", "This first suggests that, while other clues in the sentence could indicate the target verb's number (such as a noun's determiner), the noun itself is the core source of number information.", "Further, this shows the target can get information from intermediate tokens, instead of number being passed exclusively through direct attention.", "13 8 Conclusion Our analysis of grammatical number allows us to track how a simple morpho-syntactic property, grammatical number, is encoded across BERT's layers and where it is transferred between them before being used on the model's predictions.", "Using carefully chosen causal interventions, we demonstrate that forgetting number information impacts both:", "(i) BERT's behavior and", "(ii) how much information is extractable from BERT's inner layers.", "Further, the effects of our interventions on these two, i.e., behavior and information extractability, line up satisfyingly, and reveal the encoding of number to be orthogonal for nouns and verbs.", "Finally, we are also able to identify the layers in which the transfer of information occurs, and find that the information is not passed directly but through intermediate tokens.", "Our ability to concretely evaluate our interventions' impact is due to our focus on grammatical number and the number agreement taskwhich directly align probed information and behavioral performance.", "The authors foresee no ethical concerns with the work presented in this paper.", "We thank Josef Valvoda, the anonymous reviewers, and the meta-reviewer, for their invaluable feedback in improving this paper.", "Karim Lasri's work is funded by the French government under management of Agence Nationale de la Recherche as part of the Investissements d'avenir program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Insti-tute).", "Ryan Cotterell acknowledges support from the Swiss National Science Foundation (SNSF) as part of the The Forgotten Role of Inductive Bias in Interpretability project.", "Tiago Pimentel is supported by a Meta Research PhD Fellowship." ]
[ "abstain", "abstain", "result", "objective", "method", "method", "objective", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "method", "other", "other", "other", "other", "objective", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "other", "other" ]
[ "There has been much recent work on training neural attention models at the sequence-level using either reinforcement learning-style methods or by optimizing the beam.", "In this paper, we survey a range of classical objective functions that have been widely used to train linear models for structured prediction and apply them to neural sequence to sequence models.", "Our experiments show that these losses can perform surprisingly well by slightly outperforming beam search optimization in a like for like setup.", "We also report new state of the art results on both IWSLT'14 German-English translation as well as Gigaword abstractive summarization.", "On the large WMT'14 English-French task, sequence-level training achieves 41.5 BLEU which is on par with the state of the art. 1 1 Introduction Sequence to sequence models are usually trained with a simple token-level likelihood loss (Sutskever et al., 2014; Bahdanau et al., 2014).", "However, at test time, these models do not produce a single token but a whole sequence.", "In order to resolve this inconsistency and to potentially improve generation, recent work has focused on training these models at the sequence-level, for instance using REINFORCE (Ranzato et al., 2015), actor-critic (Bahdanau et al., 2016), or with beam search optimization (Wiseman and Rush, 2016).", "Before the recent work on sequence level training for neural networks, there has been a large body of research on training linear models at the Equal contribution.", "sequence level.", "For example, direct loss optimization has been popularized in machine translation with the Minimum Error Rate Training algorithm (MERT; Och 2003) and expected risk minimization has an extensive history in NLP (Smith and Eisner, 2006; Rosti et al., 2010; Green et al., 2014).", "This paper revisits several objective functions that have been commonly used for structured prediction tasks in NLP (Gimpel and Smith, 2010) and apply them to a neural sequence to sequence model (Gehring et al., 2017b) ( 2).", "Specifically, we consider likelihood training at the sequence-level, a margin loss as well as expected risk training.", "We also investigate several combinations of global losses with token-level likelihood.", "This is, to our knowledge, the most comprehensive comparison of structured losses in the context of neural sequence to sequence models ( 3).", "We experiment on the IWSLT'14 German-English translation task (Cettolo et al., 2014) as well as the Gigaword abstractive summarization task (Rush et al., 2015).", "We achieve the best reported accuracy to date on both tasks.", "We find that the sequence level losses we survey perform similarly to one another and outperform beam search optimization (Wiseman and Rush, 2016) on a comparable setup.", "On WMT'14 English-French, we also illustrate the effectiveness of risk minimization on a larger translation task.", "Classical losses for structured prediction are still very competitive and effective for neural models ( 5, 6).", "The general architecture of our sequence to sequence models follows the encoder-decoder approach with soft attention first introduced in (Bah-danau et al., 2014).", "As a main difference, in most of our experiments we parameterize the encoder and the decoder as convolutional neural 355 networks instead of recurrent networks (Gehring et al., 2017a,b).", "Our use of convolution is motivated by computational and accuracy considerations.", "However, the objective functions we present are model agnostic and equally applicable to 
recurrent and convolutional models.", "We demonstrate the applicability of our objective functions to recurrent models (LSTM) in our comparison to Wiseman and Rush (2016) in 6.6.", "Notation.", "We denote the source sentence as x , an output sentence of our model as u , and the reference or target sentence as t .", "For some objectives, we choose a pseudo reference u instead, such as a model output with the highest BLEU or ROUGE score among a set of candidate outputs, U , generated by our model.", "Concretely, the encoder processes a source sentence x = ( x 1 , . . . , x m ) containing m words and outputs a sequence of states z = ( z 1 . . . . , z m ) .", "The decoder takes z and generates the output sequence u = ( u 1 , . . . , u n ) left to right, one element at a time.", "For each output u i , the decoder computes hidden state h i based on the previous state h i 1 , an embedding g i 1 of the previous target language word u i 1 , as well as a conditional input c i derived from the encoder output z .", "The attention context c i is computed as a weighted sum of ( z 1 , . . . , z m ) at each time step.", "The weights of this sum are referred to as attention scores and allow the network to focus on the most relevant parts of the input at each generation step.", "Attention scores are computed by comparing each encoder state z j to a combination of the previous decoder state h i and the last prediction u i ; the result is normalized to be a distribution over input elements.", "At each generation step, the model scores for the V possible next target words u i by transforming the decoder output h i via a linear layer with weights W o and bias b o : s i = W o h i + b o .", "This is turned into a distribution via a softmax: p ( u i | u 1 , . . . , u i 1 , x ) = softmax( s i ) .", "Our encoder and decoder use gated convolutional neural networks which enable fast and accurate generation (Gehring et al., 2017b).", "Fast generation is essential to efficiently train on the model output as is done in this work as sequence-level losses require generating at training time.", "Both encoder and decoder networks share a simple block structure that computes intermediate states based on a fixed number of input tokens and we stack several blocks on top of each other.", "Each block contains a 1-D convolution that takes as input k feature vectors and outputs another vector; subsequent layers operate over the k output elements of the previous layer.", "The output of the convolution is then fed into a gated linear unit (Dauphin et al., 2017).", "In the decoder network, we rely on causal convolution which rely only on states from the previous time steps.", "The parameters of our model are all the weight matrices in the encoder and decoder networks.", "Further details can be found in Gehring et al. (2017b).", "We compare several objective functions for training the model architecture described in 2.", "The corresponding loss functions are either computed over individual tokens ( 3.1), over entire sequences ( 3.2) or over a combination of tokens and sequences ( 3.3).", "An overview of these loss functions is given in Figure 1.", "Most prior work on sequence to sequence learning has focused on optimizing token-level loss functions, i.e., functions for which the loss is computed additively over individual tokens.", "Token-level likelihood ( TokNLL , Equation", "1) minimizes the negative log likelihood of individual reference tokens t = ( t 1 , . . . 
, t n ) .", "It is the most common loss function optimized in related work and serves as a baseline for our comparison.", "Likelihood training forces the model to make extreme zero or one predictions to distinguish between the ground truth and alternatives.", "This may result in a model that is too confident in its training predictions, which may hurt its generalization performance.", "Label smoothing addresses this by acting as a regularizer that makes the model less confident in its predictions.", "Specifically, we smooth the target distribution with a prior distribution f that is independent of the current input x (Szegedy et al., 2015; Pereyra et al., 2017; Vaswani et al., 2017).", "We use a uniform prior distribution over all words in the vocabulary, f = 1 V .", "One may also use a unigram distribution which has been shown to work better on some tasks (Pereyra et al., 2017).", "Label smoothing is equivalent to adding the KL divergence between f and the model prediction 356 L TokNLL = n X i =1 log p ( t i | t 1 , . . . , t i 1 , x ) (1) L TokLS = n X i =1 log p ( t i | t 1 , . . . , t i 1 , x ) DKL ( f k p ( t i | t 1 , . . . , t i 1 , x )) (2) L SeqNLL = log p ( u | x ) + log X u U ( x ) p ( u | x ) (3) L Risk = X u U ( x ) cost( t , u ) p ( u | x ) P u 0 U ( x ) p ( u 0 | x ) (4) L MaxMargin = max [0 , cost( t , u ) cost( t , u ) s ( u | x ) + s ( u | x )] (5) L MultiMargin = X u U ( x ) max [0 , cost( t , u ) cost( t , u ) s ( u | x ) + s ( u | x )] (6) L SoftmaxMargin = log p ( u | x ) + log X u U ( x ) exp [ s ( u | x ) + cost( t , u )] (7) Figure 1: Token and sequence negative log-likelihood (Equations 1 and 3), token-level label smoothing (Equa-tion 2), expected risk (Equation 4), max-margin (Equation 5), multi-margin (Equation 6), softmax-margin (Equa-tion 7).", "p ( u | x ) to the negative log likelihood ( TokLS , Equation 2).", "In practice, we implement label smoothing by modifying the ground truth distribution for word u to be q ( u ) = 1 (cid:15) and q ( u 0 ) = (cid:15)V for u 0 6 = u instead of q ( u ) = 1 and q ( u 0 ) = 0 where (cid:15) is a smoothing parameter.", "We also consider a class of objective functions that are computed over entire sequences, i.e., sequence-level objectives.", "Training with these objectives requires generating and scoring multiple candidate output sequences for each input sequence during training, which is computationally expensive but allows us to directly optimize task-specific metrics such as BLEU or ROUGE.", "Unfortunately, these objectives are also typically defined over the entire space of possible output sequences, which is intractable to enumerate or score with our models.", "Instead, we compute our sequence losses over a subset of the output space, U ( x ) , generated by the model.", "We discuss approaches for generating this subset in 4.", "Similar to TokNLL , we can minimize the negative log likelihood of an entire sequence rather than individual tokens ( SeqNLL , Equation 3).", "The log-likelihood of sequence u is the sum of individual token log probabilities, normalized by the number of tokens to avoid bias towards shorter sequences: p ( u | x ) = exp 1 n n X i =1 log p ( u i | u 1 , . . . 
, u i 1 , x ) As target we choose a pseudo reference 2 amongst the candidates which maximizes either BLEU or ROUGE with respect to t , the gold reference: u ( x ) = arg max u U ( x ) BLEU( t , u ) As is common practice when computing BLEU at the sentence-level, we smooth all initial counts to one (except for unigram counts) so that the geometric mean is not dominated by zero-valued n gram match counts (Lin and Och, 2004).", "This objective minimizes the expected value of a given cost function over the space of candidate sequences ( Risk , Equation 4).", "In this work we use task-specific cost functions designed to maximize BLEU or ROUGE (Lin, 2004), e.g., cost( t , u ) = 2 Another option is to use the gold reference target, t , but in practice this can lead to degenerate solutions in which the model assigns low probabilities to nearly all outputs.", "This is discussed further in 4.", "1 BLEU( t , u ) , for a given a candidate sequence u and target t .", "Different to SeqNLL ( 3.2), this loss may increase the score of several candidates that have low cost, instead of focusing on a single sequence which may only be marginally better than any alternatives.", "Optimizing this loss is a particularly good strategy if the reference is not always reachable, although compared to classical phrase-based models, this is less of an issue with neural sequence to sequence models that predict individual words or even sub-word units.", "The Risk objective is similar to the REINFORCE objective used in Ranzato et al. (2015), since both objectives optimize an expected cost or reward (Williams, 1992).", "However, there are a few important differences: (1) whereas REINFORCE typically approximates the expectation with a single sampled sequence, the Risk objective considers multiple sequences; (2) whereas REINFORCE relies on a baseline reward 3 to determine the sign of the gradients for the current sequence, for the Risk objective we instead estimate the expected cost over a set of candidate output sequences (see 4); and (3) while the baseline reward is different for every word in REINFORCE, the expected cost is the same for every word in risk minimization since it is computed on the sequence level based on the actual cost.", "MaxMargin (Equation", "5) is a classical margin loss for structured prediction (Taskar et al., 2003; Tsochantaridis et al., 2005) which enforces a margin between the model scores of the highest scoring candidate sequence u and a reference sequence.", "We replace the human reference t with a pseudo-reference u since this setting performed slightly better in early experiments; u is the candidate sequence with the highest BLEU score.", "The size of the margin varies between samples and is given by the difference between the cost of u and the cost of u .", "In practice, we scale the margin by a hyper-parameter determined on the validation set: (cost( t , u ) cost( t , u )) .", "For this loss we use the unnormalized scores computed by the model before the final softmax: s ( u | x ) = 1 n n X i =1 s ( u i | u 1 , . . . , u i 1 , x ) 3 Ranzato et al. 
(2015) estimate the baseline reward for REINFORCE with a separate linear regressor over the model's current hidden state.", "MaxMargin only updates two elements in the candidate set.", "We therefore consider MultiMargin (Equation", "6) which enforces a margin between every candidate sequence u and a reference sequence (Herbrich et al., 1999), hence the name Multi-Margin.", "Similar to MaxMargin , we replace the reference t with the pseudo-reference u .", "Finally, SoftmaxMargin (Equation", "7) is another classic loss that has been proposed by Gimpel and Smith (2010) as another way to optimize task-specific costs.", "The loss augments the scores inside the exp of SeqNLL (Equation", "3) by a cost.", "The intuition is that we want to penalize high cost outputs proportional to their cost.", "We also experiment with two variants of combining sequence-level objectives ( 3.2) with token-level objectives ( 3.1).", "First, we consider a weighted combination ( Weighted ) of both a sequence-level and token-level objective (Wu et al., 2016), e.g., for TokLS and Risk we have: L Weighted = L TokLS + (1 ) L Risk (8) where is a scaling constant that is tuned on a held-out validation set.", "Second, we consider a constrained combination ( Constrained ), where for any given input we use either the token-level or sequence-level loss, but not both.", "The motivation is to maintain good token-level accuracy while optimizing on the sequence-level.", "In particular, a sample is processed with the sequence loss if the token loss under the current model is at least as good as the token loss of a baseline model L b TokLS .", "Otherwise, we update according to the token loss: L Constrained = ( L Risk L TokLS L b TokLS L TokLS otherwise (9) In this work we use a fixed baseline model that was trained with a token-level loss to convergence.", "The sequence-level objectives we consider ( 3.2) are defined over the entire space of possible output sequences, which is intractable to enumerate or", "score with our models.", "We therefore use a subset of K candidate sequences U ( x ) = { u 1 , . . . 
, u K } , which we generate with our models.", "We consider two search strategies for generating the set of candidate sequences.", "The first is beam search , a greedy breadth-first search that maintains a beam of the topK scoring candidates at each generation step.", "Beam search is the de facto decoding strategy for achieving state-of-the-art results in machine translation.", "The second strategy is sampling (Chatterjee and Cancedda, 2010), which produces K independent output sequences by sampling from the model's conditional distribution.", "Whereas beam search focuses on high probability candidates, sampling introduces more diverse candidates (see comparison in 6.5).", "We also consider both online and offline candidate generation settings in 6.4.", "In the online setting, we regenerate the candidate set every time we encounter an input sentence x during training.", "In the offline setting, candidates are generated before training and are never regenerated.", "Offline generation is also embarrassingly parallel because all samples use the same model.", "The disadvantage is that the candidates become stale.", "Our model may perfectly be able to discriminate between them after only a single update, hindering the ability of the loss to correct eventual search errors.", "4 Finally, while some past work has added the reference target to the candidate set, i.e., U 0 ( x ) = U ( x ) { t } , we find this can destabilize training since the model learns to assign low probabilities nearly everywhere, ruining the candidates generated by the model, while still assigning a slightly higher score to the reference (cf.", "Shen et al. (2016)).", "Accordingly we do not add the reference translation to our candidate sets.", "We experiment on the IWSLT'14 German to English (Cettolo et al., 2014) task using a similar setup as Ranzato et al. (2015), which allows us to compare to other recent studies that also adopted this setup, e.g., Wiseman and Rush (2016).", "5 The training data consists of 160K sentence pairs and the validation set comprises 7K sentences ran-4 We can mitigate this issue by regenerating infrequently, i.e., once every b batches but we leave this to future work.", "5 Different to Ranzato et al. 
(2015) we train on sentences of up to 175 rather than 50 tokens.", "domly sampled and held-out from the train data.", "We test on the concatenation of tst2010, tst2011, tst2012, tst2013 and dev2010 which is of similar size to the validation set.", "All data is lowercased and tokenized with a byte-pair encoding (BPE) of 14,000 types (Sennrich et al., 2016) and we evaluate with case-insensitive BLEU.", "We also experiment on the much larger WMT'14 English-French task.", "We remove sentences longer than 175 words as well as pairs with a source/target length ratio exceeding 1.5 resulting in 35.5M sentence-pairs for training.", "The source and target vocabulary is based on 40K BPE types.", "Results are reported on both newstest2014 and a validation set held-out from the training data comprising 26,658 sentence pairs.", "We modify the fairseq-py toolkit to implement the objectives described in 3.", "6 Our translation models have four convolutional encoder layers and three convolutional decoder layers with a kernel width of 3 and 256 dimensional hidden states and word embeddings.", "We optimize these models using Nesterov's accelerated gradient method (Sutskever et al., 2013) with a learning rate of 0.25 and momentum of 0.99.", "Gradient vectors are renormalized to norm 0.1 (Pascanu et al., 2013).", "We train our baseline token-level models for 200 epochs and then anneal the learning by shrinking it by a factor of 10 after each subsequent epoch until the learning rate falls below 10 4 .", "All sequence-level models are initialized with parameters of a token-level model before annealing.", "We then train sequence-level models for another 10 to 20 epochs depending on the objective.", "Our batches contain 8K tokens and we normalize gradients by the number of non-padding tokens per mini-batch.", "We use weight normalization for all layers except for lookup tables (Salimans and Kingma, 2016).", "Besides dropout on the em-beddings and the decoder output, we also apply dropout to the input of the convolutional blocks at a rate of 0.3 (Srivastava et al., 2014).", "We tuned the various parameters above and report accuracy on the test set by choosing the best configuration based on the validation set.", "We length normalize all scores and probabilities in the sequence-level losses by dividing by the number of tokens in the sequence so that scores are comparable between different lengths.", "Ad-6 https://github.com/facebookresearch/ fairseq-py .", "ditionally, when generating candidate output sequences during training we limit the output sequence length to be less than 200 tokens for efficiency.", "We generally use 16 candidate sequences per training example, except for the ablations where we use 5 for faster experimental turnaround.", "For summarization we use the Gigaword corpus as training data (Graff et al., 2003) and pre-process it identically to Rush et al. (2015) resulting in 3.8M training and 190K validation examples.", "We evaluate on a Gigaword test set of 2,000 pairs identical to the one used by Rush et al. (2015) and report F1 ROUGE similar to prior work.", "Our results are in terms of three variants of ROUGE (Lin, 2004), namely, ROUGE-1 (RG-1, unigrams), ROUGE-2 (RG-2, bigrams), and ROUGE-L (RG-L, longest-common substring).", "Similar to Ayana et al. 
(2016) we use a source and target vocabulary of 30k words.", "Our models for this task have 12 layers in the encoder and decoder each with 256 hidden units and kernel width 3.", "We train on batches of 8,000 tokens with a learning rate of 0.25 for 20 epochs and then anneal as in 5.1.", "First, we compare all objectives based on a weighted combination with token-level label smoothing (Equation 8).", "We also show the likelihood baseline (MLE) of Wiseman and Rush (2016), their beam search optimization method (BSO), the actor critic result of Bahdanau et al. (2016) as well as the best reported result on this dataset to date by Huang et al. (2017).", "We show a like-for-like comparison to Wiseman and Rush (2016) with a similar baseline model below ( 6.6).", "Table 1 shows that all sequence-level losses outperform token-level losses.", "Our baseline token-level results are several points above other figures in the literature and we further improve these results by up to 0.61 BLEU with Risk training.", "Next, we compare various strategies to combine sequence-level and token-level objectives (cf. 3.3).", "For these experiments we use 5 candidate sequences per training example for faster experimental turnaround.", "We consider Risk as test std MLE (W & R, 2016) [T] 24.03 BSO (W & R, 2016) [S] 26.36 Actor-critic (B, 2016) [S] 28.53 Huang et al. (2017) [T] 28.96 Huang et al. (2017) (+LM) [T] 29.16 TokNLL [T] 31.78 0.07 TokLS [T] 32.23 0.10 SeqNLL [S] 32.68 0.09 Risk [S] 32.84 0.08 MaxMargin [S] 32.55 0.09 MultiMargin [S] 32.59 0.07 SoftmaxMargin [S] 32.71 0.07 Table 1: Test accuracy in terms of BLEU on IWSLT'14 German-English translation with various loss functions cf.", "sequence-level loss and label smoothing as token-level loss.", "Table 2 shows that combined objectives perform better than pure Risk .", "The weighted combination (Equation", "8) with = 0 .", "3 performs best, outperforming constrained combination (Equation 9).", "We also compare to randomly choosing between token-level and sequence-level updates and find it underperforms the more principled constrained strategy.", "In the remaining experiments we use the weighted strategy.", "So far we initialized sequence-level models with parameters from a token-level model trained with label smoothing.", "Table 3 shows that initializing weighted Risk with token-level label smoothing 360 valid test TokNLL 32.96 31.74 Risk init with TokNLL 33.27 32.07 +0.31 +0.33 TokLS 33.11 32.21 Risk init with TokLS 33.91 32.85 +0.8 +0.64 Table 3: Effect of initializing sequence-level training ( Risk ) with parameters from token-level likelihood ( TokNLL ) or label smoothing ( TokLS ).", "achieves 0.7-0.8 better BLEU compared to initializing with parameters from token-level likelihood.", "The improvement of initializing with TokNLL is only 0.3 BLEU with respect to the TokNLL baseline, whereas, the improvement from initializing with TokLS is 0.6-0.8 BLEU.", "We believe that the regularization provided by label smoothing leads to models with less sharp distributions that are a better starting point for sequence-level training.", "Next, we consider the question if refreshing the candidate subset at every training step (online) results in better accuracy compared to generating candidates before training and keeping the set static throughout training (offline).", "Table 4 shows that offline generation gives lower accuracy.", "However the online setting is much slower, since regenerating the candidate set requires incremental (left to right) inference with our model 
which is very slow compared to efficient forward/backward over large batches of pre-generated hypothesis.", "In our setting, offline generation has 26 times higher throughput than the online generation setting, despite the high inference speed of fairseq (Gehring et al., 2017b).", "So far we generated candidates with beam search, however, we may also sample to obtain a more diverse set of candidates (Shen et al., 2016).", "Fig-33.1 33.2 33.3 33.4 33.5 33.6 33.7 33.8 33.9 34 2 4 8 16 32 64 100 BLEU Candidate set size beam sampleTokLS Figure 2: Candidate set generation with beam search and sampling for various candidate set sizes during sequence-level training in terms of validation accuracy.", "ure 2 compares beam search and sampling for various candidate set sizes on the validation set.", "Beam search performs better for all candidate set sizes considered.", "In other experiments, we rely on a candidate set size of 16 which strikes a good bal-ance between efficiency and accuracy.", "Next, we compare classical sequence-level training to the recently proposed Beam Search Optimization (Wiseman and Rush, 2016).", "To enable a fair comparison, we re-implement their baseline, a single layer LSTM encoder/decoder model with 256-dimensional hidden layers and word embed-dings as well as attention and input feeding (Lu-ong et al., 2015).", "This baseline is trained with Adagrad (Duchi et al., 2011) using a learning rate of 0 .", "05 for five epochs, with batches of 64 sequences.", "For sequence-level training we initialize weights with the baseline parameters and train 361 RG-1 RG-2 RG-L ABS+ [T] 29.78 11.89 26.97 RNN MLE [T] 32.67 15.23 30.56 RNN MRT [S] 36.54 16.59 33.44 WFE [T] 36.30 17.31 33.88 SEASS [T] 36.15 17.54 33.63 DRGD [T] 36.27 17.57 33.62 TokLS 36.53 18.10 33.93 + Risk RG-1 36.96 17.61 34.18 + Risk RG-2 36.65 18.32 34.07 + Risk RG-L 36.70 17.88 34.29 Table 6: Accuracy on Gigaword abstractive summarization in terms of F-measure Rouge-1 (RG-1), Rouge-2 (RG-2), and Rouge-L (RG-L) for token-level label smoothing, and Risk optimization of all three ROUGE F1 metrics.", "with Adam (Kingma and Ba, 2014) for another 10 epochs with learning rate 0 .", "00003 and 16 candidate sequences per training example.", "We conduct experiments with Risk since it performed best in trial experiments.", "Different from other sequence-level experiments ( 5), we rescale the BLEU scores in each candidate set by the difference between the maximum and minimum scores of each sentence.", "This avoids short sentences dominating the sequence updates, since candidate sets for short sentences have a wider range of BLEU scores compared to longer sentences; a similar rescaling was used by Bahdanau et al. (2016).", "Table 5 shows the results from Wiseman and Rush (2016) for their token-level likelihood baseline (MLE), best beam search optimization results (BSO), as well as our reimplemented baseline.", "Risk significantly improves BLEU compared to our baseline at +2.75 BLEU, which is slightly better than the +2.33 BLEU improvement reported for Beam Search Optimization (cf.", "Wiseman and Rush (2016)).", "This shows that classical objectives for structured prediction are still very competitive.", "Next, we experiment on the much larger WMT'14 English-French task using the same model setup as Gehring et al. 
(2017b).", "We TokLS for 15 epochs valid test TokLS 34.06 40.58 + Risk 34.20 40.95 TokLS + selfatt 34.24 41.02 + in domain 34.51 41.26 + Risk 34.30 41.22 + Risk in domain 34.50 41.47 Table 7: Test and valid BLEU on WMT'14 English-French with and without decoder self-attention.", "and then switch to sequence-level training for another epoch.", "Table 7 shows that sequence-level training can improve an already very strong model by another +0.37 BLEU.", "Next, we improve the baseline by adding self-attention (Paulus et al., 2017; Vaswani et al., 2017) to the decoder network ( TokLS + selfatt) which results in a smaller gain of +0.2 BLEU by Risk .", "If we train Risk only on the news-commentary portion of the training data, then we achieve state of the art accuracy on this dataset of 41.5 BLEU (Xia et al., 2017).", "Our final experiment evaluates sequence-level training on Gigaword headline summarization.", "There has been much prior art on this dataset originally introduced by Rush et al. (2015) who experiment with a feed-forward network (ABS+).", "Ayana et al. (2016) report a likelihood baseline (RNN MLE) and also experiment with risk training (RNN MRT).", "Different to their setup we did not find a softmax temperature to be beneficial, and we use beam search instead of sampling to obtain the candidate set (cf. 6.5).", "Suzuki and Na-gata (2017) improve over an MLE RNN baseline by limiting generation of repeated phrases.", "Zhou et al. (2017) also consider an MLE RNN baseline and add an additional gating mechanism for the encoder.", "Li et al. (2017) equip the decoder of a similar network with additional latent variables to accommodate the uncertainty of this task.", "Table 6 shows that our baseline ( TokLS ) outperforms all prior approaches in terms of ROUGE-2 and ROUGE-L and it is on par to the best previous result for ROUGE-1.", "We optimize all three ROUGE metrics separately and find that Risk can further improve our strong baseline.", "We also compared Risk only training to Weighted on this dataset (cf. 6.2) but accuracy was generally lower on the validation set: RG-1 362 (36.59 Risk only vs. 36.67 Weighted ), RG-2 (17.34 vs. 18.05), and RG-L (33.66 vs. 33.98).", "We present a comprehensive comparison of classical losses for structured prediction and apply them to a strong neural sequence to sequence model.", "We found that combining sequence-level and token-level losses is necessary to perform best, and so is training on candidates decoded with the current model.", "We show that sequence-level training improves state-of-the-art baselines both for IWSLT'14 German-English translation and Gigaword abstractive sentence summarization.", "Structured prediction losses are very competitive to recent work on reinforcement or beam optimization.", "Classical expected risk can slightly outperform beam search optimization (Wiseman and Rush, 2016) in a like-for-like setup.", "Future work may investigate better use of already generated candidates since invoking generation for each batch slows down training by a large factor, e.g., mixing with fresh and older candidates inspired by MERT (Och, 2003)." ]
[ "abstain", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "other", "method", "method", "result", "method", "result", "method", "abstain", "method", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "method", "method", "result", "result", "abstain", "abstain", "abstain" ]
[ "Personalized news recommendation is a critical technology to improve users' online news reading experience.", "The core of news recommendation is accurate matching between user's interests and candidate news.", "The same user usually has diverse interests that are reflected in different news she has browsed.", "Meanwhile, important semantic features of news are implied in text segments of different granularities.", "Existing studies generally represent each user as a single vector and then match the candidate news vector, which may lose fine-grained information for recommendation.", "In this paper, we propose FIM, a Fine-grained Interest Matching method for neural news recommendation.", "Instead of aggregating user's all historical browsed news into a unified vector, we hierarchically construct multilevel representations for each news via stacked dilated convolutions.", "Then we perform fine-grained matching between segment pairs of each browsed news and the candidate news at each semantic level.", "High-order salient signals are then identified by resembling the hierarchy of image recognition for final click prediction.", "Extensive experiments on a real-world dataset from MSN news validate the effectiveness of our model on news recommendation.", "Recently, people's news reading habits have gradually shifted to digital content services.", "Many online news websites, such as Google News 1 and MSN News 2 , aim to collect news from various sources and distribute them for users (Das et al., 2007; Lavie et al., 2010).", "However, the overwhelming number of newly-sprung news makes it difficult for users to find their interested content (Wu et al., 2019c).", "Therefore, personalized news recommendation becomes an important technology to 1 https://news.google.com/ 2 https://www.msn.com/news Historical Browsed News D 1 Watch: Philip Rivers hilariously trolls Chiefs fans after win Dog's hilarious reaction to carrot NFL playoff picture: Saints close to clinching; Patriots fall behind Texans This woman lost 245 pounds over 5 years.", "The key to news recommendation lies in the accurate matching of user's interests and candidate news.", "The same user usually has diverse interests, which are reflected in different news she has browsed.", "Meanwhile, the important semantic features of news are implied in text segments of different granularities.", "Figure 1 illustrates the challenges with an example.", "As demonstrated, different historical browsed news can reveal user's interests about different topics or events.", "The first and second historical news are about pet dogs and the issue of weight loss respectively.", "Naturally, they provide critical clues to select the candidate news C 2 and C 3 which reveal relevant information.", "However, they are less informative to identify the candidate news C 1 , which is about the competition of National Football League (NFL).", "Besides, the matched segment pairs across browsed news and candidate news lie in different granularities, such as the words Dog's-puppy and phrases lost 245 pounds-Weight Loss.", "Moreover, different segments in news texts have different importance for selecting proper news candidates.", "For example, in the third historical browsed news D 3 , Philip Rivers and Chiefs are more important than other words like hilariously and after for inferring that the user is a fan of NFL, since they refer to the famous quarterback and team of this sport.", "Existing work, however, usually learns a single representation for each user by integrating 
all historical news that the user has browsed, then recommendations are performed by matching the final user vector and the candidate news vector (Okura et al., 2017; Wu et al., 2019e,b).", "For instance, Okura et al. (2017) encode news via denoising auto-encoders, and learn representations of users from their browsed news via a GRU network.", "Wu et al. (2019e) apply multi-head self-attentions to learn news representations, then learn user representations by modeling the relatedness between browsed news.", "Wu et al. (2019b) enhance personalized news and user representations by exploiting the embedding of user's ID to generate a query vector for attending to important words and news.", "Despite the improvements of these methods in news recommendation performance, they are limited in capturing fine-grained user-news matching signals, since user's various latent interests implied in distinct historical readings cannot match with the candidate news until the final step of click prediction.", "In this paper, we propose a Fine-grained Interest Matching network (FIM), which is a new architecture for news recommendation that can tackle the above challenges.", "The advantages of FIM lie in two cores: the multi-level user/news representation and the fine-grained interest matching.", "Instead of representing each user as a single abstract vector, we employ hierarchical dilated convolutions in a unified module to construct multi-level representations of each news article based on the title and category annotations.", "By hierarchically stacking the dilated convolutions, the receptive input width at each layer grows exponentially, while the number of parameters increases only linearly.", "Meanwhile, the outputs of each layer are preserved as feature maps across different length of text segments, with no loss in coverage since any form of pooling or stride convolution is not applied.", "In this way, we can gradually obtain the semantic features of news from local correlation and long-term dependency at different granularities, including word, phrase, and sentence levels.", "Furthermore, to avoid information loss, FIM matches the text segments of the candidate news and each historical news browsed by the user at each semantic granularity.", "In practice, for each pair of news, the model constructs a segment-segment similarity matrix from word-level to sentence-level based on the hierarchical news representations.", "By this means, user's reading interests implied in the browsing history can be recognized under the supervision of candidate news, and carried into matching with minimal loss, so as to provide sufficient clues about the content relevance for recommending proper news.", "Afterwards, we merge the multiple matching matrices of each news pair at each granularity into a 3D image, whose channels indicate the relevant degrees of different kinds of user-news matching patterns.", "By resembling the CNN-based hierarchy of image recognition, higher-order salient signals are identified to predict the probability of the user clicking the candidate news.", "We conducted extensive experiments on a real-world dataset collected from MSN news.", "Experimental results validate that our approach can effectively improve the performance of news recommendation compared with the state-of-the-art methods.", "With the explosive growth of digital news, building personalized news recommender systems has drawn more attentions in both natural language processing and data mining fields (Phelan et al., 2011; Zheng et al., 
2018; Wu et al., 2019a).", "Conventional news recommendation methods focus on utilizing manual feature engineering to build news and user representations for matching (Phelan et al., 2009; Li et al., 2010; Liu et al., 2010; Son et al., 2013; Li et al., 2014; Bansal et al., 2015).", "For example, Liu et al. (2010) used topic categories and interest features generated by a Bayesian model to build news and user representations.", "Son et al. (2013) extracted topic and location features from Wikipedia pages to build news representations for location-based news recommendation.", "In recent years, deep learning based models have achieved better performance than traditional methods for news recommendation, due to their capabilities of distilling implicit semantic features in news content (Okura et al., 2017; Wang et al., 2018; An et al., 2019; Wu et al., 2019e,d).", "For example, Okura et al. (2017) learned news representations via denoising auto-encoders, then used recurrent neural networks to aggregate historical browsed ... ... ... ... 3D CNN Word Embedding Multi-grained News Representation News-by-NewsMatching Matching Matrices Aggregation News Representation Module Cross Matching Module Click Prediction Module HistoricalBrowsedNews CandidateNews 3D Matching Image Q ...", "news to learn user representations.", "Wang et al. (2018) enhanced the representation of news by exploiting the embeddings of extracted entities in a knowledge graph as a separate channel of the CNN input.", "Wu et al. (2019e) leveraged multi-head self-attentions to construct news representations based on the interactions between words, and constructed user representations based on the relatedness between news.", "An et al. (2019) proposed to learn long-term user preferences from the embeddings of their IDs, and learn short-term user interests from their recently browsed news via GRU network.", "(Wu et al., 2019a) proposed an attentive multi-view learning model to learn unified news representations from titles, bodies and topic categories by regarding them as different views of news.", "Different from these existing methods, in FIM, the representations of user's multiple browsed news are not fused into an abstract user vector before matching with the candidate news.", "Instead, we perform matching between each pair of segments in the news texts from multiple semantic levels.", "Therefore, more fine-grained information can be distilled for the final recommendation.", "The news recommendation problem can be formulated as follows.", "Given a user u , the set of historical news she has browsed at the online news platform is formulated as s u = { d 1 , . . . 
, d n } .", "For a news candidate c i , a binary label y i { 0 , 1 } is adopted to indicate whether u will click c i in latter impressions.", "The aim is to build a prediction model g ( , ) .", "For each pair of user and candidate news ( u, c ) , we can predict the probability that u would like to click c using the function g : s u , c y .", "Recommendations are performed based on the ranking of candidate news according to their click scores.", "We present a Fine-grained Interest Matching network (FIM) to model g ( , ) .", "The architecture of FIM is illustrated in Figure 2, which contains three major components, i.e., a news representation module to construct hierarchical semantic features for news text segments, a cross interaction module to exploit and aggregate matching information from each pair of news at each level of granularity, and a prediction module to calculate the probability that the user will click the candidate news.", "Next, we introduce each component in detail.", "We design a hierarchical dilated convolution (HDC) encoder to learn representations of news from multiple semantic views.", "Besides titles that can reflect the central information of news, at many digital platforms such as MSN, news articles are usually labeled with a category annotation (e.g., sports, entertainment) and a subcategory annotation (e.g., football nba, movies celebrity) to help indicate news topics and target users' in-2020/4/21 dilated_cnn.drawio 2/2 dilation=1 dilation=2 dilation=3 word embedding Figure 3: Hierarchical Dilated Convolution (HDC).", "terests.", "HDC encodes each news by connecting its title, category and subcategory annotations into a sequence of words as input.", "Given the word sequence d = [ x 1 , . . . , x N ] , where N is the sequence length, the model first looks up an embedding table to transform d into a matrix d 0 = [ x 1 , . . . 
, x N], where x_j ∈ R^d is a d-dimensional word embedding.", "Then hierarchical dilated convolution layers are applied to capture multi-grained semantic features in news texts.", "Different from standard convolution that convolves a contiguous subsequence of the input at each step, dilated convolution (Yu and Koltun, 2016) has a wider receptive field by skipping over δ input elements at a time, where δ is the dilation rate.", "For a context of x_j and a convolution kernel W of size 2w + 1, the dilated convolution operation is: F_δ(x_j) = ReLU(W [⊕_{k=0}^{w} x_{j±k·δ}] + b), (1) where ⊕ is the vector concatenation, b is the bias and ReLU (Nair and Hinton, 2010) is the nonlinear activation function.", "As shown in Figure 3, the darker output of each convolution layer is a weighted combination of the lighter regularly spaced inputs in the previous layer.", "We start with δ = 1 (equal to standard convolution) for the first layer to ensure that no element of the input sequence is excluded.", "Afterwards, by hierarchically stacking the dilated convolutions with wider dilation rates, the length of convolved text segments expands exponentially, and the semantic features of different n-grams can be covered using only a few layers and a modest number of parameters.", "Moreover, to prevent vanishing or exploding of gradients, we apply layer normalization (Ba et al., 2016) at the end of each convolution layer.", "Since there may be irrelevant information introduced to semantic units at a long distance, we practically design the multi-level dilation rates based on the performance in validation.", "The output of each stacked layer l is preserved as feature maps of the news text at a specific level of granularity, formulated as d^l = [x^l_j]_{j=1}^{N} ∈ R^{N×f_s}, where f_s is the number of filters for each layer.", "Suppose there are L layers stacked, the multi-grained news representations can be defined as [d^0, d^1, . . . , d^L].", "By this means, HDC gradually harvests lexical and semantic features from word and phrase levels with small dilation rates, and captures long-term dependencies from sentence level with larger dilation rates.", "Meanwhile, the computational path is greatly shortened, and the negative effects of information loss caused by down-sampling methods such as max-pooling can be reduced.", "Our news encoder is superior to the recurrent units in parallel ability and the entirely attention-based approach in reducing token-pair memory consumptions.", "Given representations of the k-th browsed news [d^l_k]_{l=0}^{L} and the candidate news [c^l]_{l=0}^{L}, a segment-segment matching matrix is constructed for each granularity, i.e., M^l_{k,c} ∈ R^{N_{d_k}×N_c}, where l ∈ {0, . . . , L} is the semantic level, and N_{d_k} and N_c are the lengths of the news d_k and c.", "The (i, j)-th element of M^l_{k,c} is calculated by scaled dot product as: M^l_{k,c}[i, j] = d^l_k[i] · c^l[j]^T / √f_s, (2) indicating the relevance between the i-th segment in d_k and the j-th segment in c according to the l-th representation type.", "The L + 1 matching matrices for the news pair <d_k, c> can be viewed as different feature channels of their matching information.", "To summarize the information of user's entire reading sequence, FIM fuses all interaction matrices across each browsed news and the candidate news into a 3D matching image Q, formulated as: Q = {Q_{k,i,j}}_{n×N_{d_k}×N_c}, (3) where n denotes the total number of browsed news in user history, and each pixel Q_{k,i,j} is defined as: Q_{k,i,j} = [M^l_{k,c}[i, j]]_{l=0}^{L}.", "(4) Specifically, each pixel is a concatenated vector with L + 1 channels, indicating the matching degrees between a certain segment pair of the news content at different levels of granularity.", "As user's click behaviors may be driven by personalized interests or temporary demands and events, different historical browsed news has different usefulness and representativeness for matching and recommending the proper candidate news.", "Inspired by Zhou et al. (2018) in the field of dialogue systems, we resemble the compositional hierarchy of image recognition, and employ a layered 3D convolution & max-pooling neural network to identify the salient matching signals from the whole image.", "The 3D convolution is the extension of typical 2D convolution, whose filters and strides are 3D cubes.", "Formally, the higher-order pixel at (k, i, j) on the z-th feature map of the t-th layer is computed as: Q^{(t,z)}_{k,i,j} = ELU( Σ_{z'} Σ_{w=0}^{W_t−1} Σ_{h=0}^{H_t−1} Σ_{r=0}^{R_t−1} K^{(t,z)}_{w,h,r} · Q^{(t−1,z')}_{k+w, i+h, j+r} + b^{(t)} ), (5) where z' denotes each feature map of the previous layer, K^{(t,z)} ∈ R^{W_t×H_t×R_t} is a 3D convolution kernel with the size of W_t × H_t × R_t, and b^{(t)} is the bias for the t-th layer.", "A max pooling operation is then adopted to extract salient signals as follows: Q̂^{(t,z)}_{k,i,j} = max( Q^{(t,z)}_{[k : k+P^{(t,z)}_w−1], [i : i+P^{(t,z)}_h−1], [j : j+P^{(t,z)}_r−1]} ), (6) where P^{(t,z)}_w, P^{(t,z)}_h and P^{(t,z)}_r are the sizes of 3D max-pooling.", "Outputs of the final layer are concatenated as the integrated matching vector between the user and the candidate news, denoted as s_{u,c} ∈ R^v.", "In the recommendation scenario studied in this paper, recommendations are made based on ranking the candidate news articles according to their probabilities of being clicked by a user in an impression.", "Given the integrated matching vector s_{u,c} of a user and candidate news pair, the final click probability is calculated as: ŷ_{u,c} = W_o^T s_{u,c} + b_o, (7) where W_o and b_o are learned parameters.", "Motivated by (Huang et al., 2013b) and (Wu et al., 2019e), we leverage the negative sampling technique for model training.", "For each news browsed by a user (regarded as a positive sample), we randomly sample K news which are showcased in the same impression but not clicked by the user as negative samples.", "Besides, the orders of these news are shuffled to avoid positional biases.", "FIM jointly predicts the click probability scores of the positive news and the K negative news during training.", "By this means, the news click prediction problem is reformulated as a (K+1)-way classification task.", "The loss function is designed to minimize the summation of negative log-likelihood of all positive samples, which is defined as: −Σ_{i=1}^{S} log ( exp(ŷ^+_{u_i,c_i}) / ( exp(ŷ^+_{u_i,c_i}) + Σ_{k=1}^{K} exp(ŷ^−_{u_i,c_{i,k}}) ) ), (8) where S is the number of positive training samples, and c_{i,k} is the k-th negative sample in the same impression with the i-th positive sample.", "We conducted experiments on the Microsoft News dataset used in (Wu et al., 2019b) 3 , which was built from the user click logs of Microsoft News 4 .", "The detailed statistics are shown in Table 1.", "Logs in the last week were used for test, and the rest for model training.", "Besides, we randomly sampled 10% of logs in the training data for validation.", "In our experiments, the word embeddings are 300-dimensional and initialized using pre-trained Glove embedding vectors (Pennington et al., 2014).", "Due to the limitation of GPU memory, the maximum length of the concatenated word sequence of news title and category is set to 20, and at most 50 browsed news are kept for representing the user's recently reading behaviors.", "We tested stacking 1-5 HDC layers with different dilation rates.", "The reported results utilize [1-2-3] hierarchy (dilation rate for each
convolution layer) as it gains the best performance on the validation set.", "The window size and number of convolution filters for news representation are 3 and 150 respectively.", "For the cross interaction module, we use two-layered composition to distill higher-order salient features of the 3D matching image, and the number and window size of 3D convolution filters are 32-[3,3,3] for the first layer and 16-[3,3,3] for the second layer, with [1,1,1] stride.", "The followed max-pooling size is [3,3,3] with [3,3,3] stride.", "Meanwhile, the negative sampling ratio K is set to 4. Adam (Kingma and Ba, 2014) is used as the optimizer, the mini-batch size is 100, and the initial learning rate is 1e-3.", "Following the settings of state-of-the-art methods (Okura et al., 2017; Wu et al., 2019e), we use popular ranking metrics to evaluate the performance of each model, including AUC (Area 3 A large-scale public version of Microsoft News dataset for news recommendation can be found at https://msnews. github.io 4 https://microsoftnews.msn.com # users 10,000 # topic categories 14 # news 42,255 # subtopic categories 284 # impressions 445,230 # positive samples 489,644 avg. # words per title 11.29 # negative samples 6,651,940 Table 1: Statistics of the dataset. Under the ROC Curve) (Bradley, 1997), MRR (Mean Reciprocal Rank) (Voorhees et al., 1999), and NDCG (Normalized Discounted Cumulative Gain) (Jarvelin and Kekalainen, 2002).", "We independently repeated each experiment for 10 times and reported the average performance.", "Manual Feature-based Methods: Traditional recommendation methods which rely on manual feature engineering to build news and user representations, including (1) LibFM (Rendle, 2012), a feature-based matrix factorization model that is widely used in recommendations.", "We extract TF-IDF features from users' browsed news and candidate news, and concatenate them as the input for LibFM ; (2) DSSM (Huang et al., 2013a), a deep structured semantic model with word hashing via character trigram and multiple dense layers.", "All browsed news are merged into a long document as the query; (3) Wide & Deep (Cheng et al., 2016), a popular recommendation method that combines a wide channel for linear transformations and a deep channel with multiple dense layers.", "The same features with LibFM are used for both channels; (4) DeepFM (Guo et al., 2017), combining factorization machines and deep neural networks with the same features as LibFM .", "Neural Recommendation Methods: Neural networks specially designed for news recommendation, including (1) DFM (Lian et al., 2018), a deep fusion model combining dense layers with different depths and using attention mechanism to select important features; (2) DKN (Wang et al., 2018), incorporating entity information in knowledge graphs with Kim CNN (Kim, 2014) to learn news representations and using news-level attention network to learn user representations; (3) GRU (Okura et al., 2017), using auto-encoders to represent news and a GRU network to represent users; (4) NRMS (Wu et al., 2019e), leveraging multi-head self-attentions for news and user representation learning; (5) Hi-Fi Ark (Liu et al., 2019), summarizing user history into highly compact and complementary vectors as archives, and learning candidate-dependent user Methods AUC MRR NDCG@5 NDCG@10 LibFM 0.5661 0.2414 0.2689 0.3552 DSSM 0.5949 0.2675 0.2881 0.3800 Wide&Deep 0.5812 0.2546 0.2765 0.3674 DeepFM 0.5830 0.2570 0.2802 0.3707 DFM 0.5861 0.2609 0.2844 0.3742 DKN 0.6032 0.2744 0.2967 0.3873 
GRU 0.6102 0.2811 0.3035 0.3952 NRMS 0.6275 0.2985 0.3217 0.4139 Hi-Fi Ark 0.6027 0.3162 0.3335 0.4204 NPA 0.6243 0.3321 0.3535 0.4380 FIM 0.6359* 0.3354* 0.3582* 0.4436* FIM first 0.6258 0.3266 0.3484 0.4348 FIM last 0.6319 0.3323 0.3549 0.4407 Table 2: The performance of different methods on news recommendation.", "representation via attentive aggregation of such archives; (6) NPA (Wu et al., 2019b), using personalized attention with user ID's embedding as the query vector to select important words and news.", "Ablation Variants: To verify the effects of multi-grained representation and sequential matching, we further setup two comparing ablation models, i.e., (1) FIM first : a variant in which we use feature maps of the first news representation layer for matching and recommendation.", "In this scenario, the HDC module degenerates into a one-layer standard CNN encoder.", "(2) FIM last : a variant using the outputs of the last layer in HDC (namely, the L -th embedding type) to represent each news for matching.", "Due to the hierarchical representation architecture, higher-level features synthesize information from lower-level features, and can model more complex lexical and semantic clues.", "Table 2 shows the results of our model and all comparative methods.", "Several observations can be made.", "First, neural news recommendation methods (e.g., GRU , NRMS , Hi-Fi Ark , NPA ) are generally better than traditional methods (e.g., LibFM , DeepFM ) that are based on manual feature engineering.", "The reason might be that handcrafted features are usually not optimal, and deep neural networks take the advantages of extracting implicit semantic features and modeling latent relationships between user and news representations.", "Second, our model FIM consistently outperforms other baselines in terms of all metrics, including the state-of-the-art deep learning based mod-", "Third, both FIM first and FIM last show a decrease of performance compared to FIM.", "The latter is better than the former, indicating the effectiveness of constructing higher-level representations on the basis of low levels via the hierarchical mechanism of HDC .", "Besides, compared with DKN that utilizes knowledge-enhanced CNNs to learn news representations, FIM first has a better performance, illustrating the advantage of pair-wise matching fashion.", "Another notable thing is that while FIM last underperforms FIM, it can outperform all other competitors on all metrics.", "However, the benefit of interacting news pairs at multi-grained semantic levels is still significant.", "Figure", "4(a) shows the experimental results.", "We can find that the performance consistently improves when K is lower than 5, then begins to decline.", "The possible reason is that with a too small K , the useful information exploited from negative samples is limited.", "However, when too many negative samples are incorporated, they may become dominant and the imbalance of training data will be increased.", "Thus it is more difficult for the model to precisely recognize the positive samples, 
which will also affect the recommendation performance.", "Overall, the optimal setting of K is moderate (e.g., K = 4).", "els.", "This validates the advantage of the pair-wise multi-level matching architecture in synthetically detecting fine-grained matching information from news segment pairs to predict the probability of a user clicking a candidate news.", "In this section, we further investigate the impacts of different parameters and inputs on the model performance, and discuss the contribution of multi-grained representation and matching architecture.", "We then explore the influence of the 3D convolution & max-pooling neural network for processing the matching image Q .", "Comparing results are illustrated in Figure", "4(b), where the CNN hierarchy a b means that the number of filters for the first layer and the second layer are set to a and b , separately.", "As shown, given the filter number a for the first layer, the performance first increases with a larger filter number b for the second layer, since more high-order information can be extracted.", "Then the performance begins to decrease, possibly because", "more noisy patterns are introduced to the model (e.g., the group of [32 8, 32 16, 32 32]).", "Besides, a similar trend exists in the hierarchies with the same value b and different value a (e.g., the group of [16 8, 32 8, 64 8]).", "We conduct other experiments by changing the window size in [2,3,4,5] and the number of convolution layers in [1,2,3].", "Results show that the optimal hierarchy is two-layered CNNs, with 32 [3,3,3] filters for the first layer and 16 [3,3,3] filters for the second layer.", "We further compare different combinations of the number of dilated convolution filters and stacked layers in the HDC news representation module.", "Figure", "4(c) demonstrates the results, where darker areas represent larger values.", "We observe a consistent trend over settings with different number of filters at each layer, i.e., there is a sig-nificant improvement during the first few stacked layers, and then the performance decreases a lot when the depth grows to 5. 
The results indicate that depth of representation layers indeed matters in terms of matching and recommendation accuracy.", "The optimal setting of the number of stacked layers and convolution filters is 3 and 150 respectively.", "We think the reason might be that in this scenario, the perceived field of dilated convolution filters at each layer ranges among [3-7-13] (with dilation rates as [1-2-3]), which is sufficient for modeling multi-grained n-gram features through hierarchical composition of local interactions, compared to the average length of news word sequences.", "We also investigate the effectiveness of incorporating two-level category annotations of news as inputs.", "The results are shown in Figure", "4(d).", "We can find that incorporating either categories or subcategories can benefit the performance of our model.", "This is interpretable since category annotations are helpful to reveal user's interested aspects more explicitly.", "In addition, enhancing news representations with subcategories is better than with categories.", "This is probably because compared to the general category labels, subcategories can provide more concrete and detailed information to indicate the core topic of news content.", "Overall, jointly incorporating the two-level category annotations can achieve the best performance.", "In this subsection, we further study the effectiveness of constructing hierarchical news representations and performing multi-grained interest matching.", "Figure 5 gives visualizations of the multi-grained matching matrices (defined as formula 2) between historical browsed news and candidate news for a user, where M l denotes a matching matrix of a news pair at the l -th representation level.", "We observe that the important matching information captured by the 1st-level matching matrix is mainly lexical relevance.", "For example, the words football, nfl, playoff, playoffs and quar-terbacks are more correlated and assigned higher matching values in M 1 , which may due to their similar co-occurrence information encoded in word embeddings.", "Differently, higher-level matching matrices have the ability to identify more sophisticated semantic structures and latent long-term dependencies.", "From Figure", "5(b), the interactive areas between the segments weight loss in the candidate news and lost pounds in the browsed news significantly gain larger matching scores among the 2-nd level semantic representations.", "In the matching matrix M 3 in Figure", "5(c), the subsequences about trump walks out are distinguished, since the expressions have correlated meanings.", "Meanwhile, the results also indicate that our model has the ability to identify important segments of a sentence and ignore the parts with less information, which is helpful to capture user's interested topics or events more accurately.", "In this paper, we propose a new architecture for neural news recommendation based on multi-grained representation and matching.", "Different from previous work that first integrates user's reading history into a single representation vector and then matches the candidate news representation, our model can capture more fine-grained interest matching signals by performing interactions between each pair of news at multi-level semantic granularities.", "Extensive experiments on a real-world dataset collected from MSN news show that our model significantly outperforms the state-of-the-art methods.", "In the future, we will do more tests and surveys on the improvement of business 
objectives such as user experience, user engagement and service revenue." ]
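The hierarchical dilated convolution (HDC) encoder described in the record above is easy to sketch. The snippet below is an illustrative reconstruction rather than the authors' released code: the dilation rates [1, 2, 3], kernel size 3, 300-dimensional word embeddings and 150 filters follow the settings reported in the text, while the class name, tensor layout and choice of PyTorch are assumptions of ours.

```python
import torch
import torch.nn as nn

class HDCEncoder(nn.Module):
    """Minimal sketch of the hierarchical dilated convolution (HDC) news encoder.

    Each stacked layer uses a wider dilation rate, so the receptive field grows
    while the sequence length is preserved (no pooling or strided convolution).
    Every layer's output is kept, giving multi-grained representations
    [d^0, d^1, ..., d^L].
    """
    def __init__(self, emb_dim=300, n_filters=150, kernel_size=3, dilations=(1, 2, 3)):
        super().__init__()
        self.convs = nn.ModuleList()
        self.norms = nn.ModuleList()
        in_ch = emb_dim
        for d in dilations:
            # padding = d * (kernel_size - 1) // 2 keeps the output length equal to N
            self.convs.append(nn.Conv1d(in_ch, n_filters, kernel_size,
                                        dilation=d, padding=d * (kernel_size - 1) // 2))
            self.norms.append(nn.LayerNorm(n_filters))
            in_ch = n_filters

    def forward(self, word_emb):
        # word_emb: (batch, N, emb_dim) -- embedded title + category + subcategory words
        outputs = [word_emb]                     # d^0: the word-level representation
        x = word_emb.transpose(1, 2)             # Conv1d expects (batch, channels, N)
        for conv, norm in zip(self.convs, self.norms):
            x = torch.relu(conv(x))
            x = norm(x.transpose(1, 2)).transpose(1, 2)   # layer norm over the filter dim
            outputs.append(x.transpose(1, 2))    # d^l: (batch, N, n_filters)
        return outputs                           # [d^0, d^1, ..., d^L]

# toy usage: a batch of 2 news texts, 20 words each, 300-dim embeddings
reps = HDCEncoder()(torch.randn(2, 20, 300))
print([r.shape for r in reps])
# [torch.Size([2, 20, 300]), torch.Size([2, 20, 150]), torch.Size([2, 20, 150]), torch.Size([2, 20, 150])]
```

Keeping every layer's output, rather than only the last one, is what yields the multi-grained representations that the cross-matching module consumes.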
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "other", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "objective" ]
[ "Generating from Abstract Meaning Representation (AMR) is an underspecified problem, as many syntactic decisions are not constrained by the semantic graph.", "To explicitly account for this underspecification, we break down generating from AMR into two steps: first generate a syntactic structure, and then generate the surface form.", "We show that decomposing the generation process this way leads to state-of-the-art single model performance generating from AMR without additional unlabelled data.", "We also demonstrate that we can generate meaning-preserving syntactic paraphrases of the same AMR graph, as judged by humans.", "Abstract Meaning Representation (AMR) (Ba-narescu et al., 2013) is a semantic annotation framework which abstracts away from the surface form of text to capture the core who did what to whom' structure.", "As a result, generating from AMR is underspecified (see Figure 1 for an exam-ple).", "Single-step approaches to AMR generation (Flanigan et al., 2016; Konstas et al., 2017; Song et al., 2016, 2017) therefore have to decide the syntax and surface form of the AMR realisation in one go.", "We instead explicitly try and capture this syntactic variation and factor the generation process through a syntactic representation (Walker et al., 2001; Dusek and Jurcicek, 2016; Gardent and Perez-Beltrachini, 2017; Currey and Heafield, 2018).", "First, we generate a delexicalised constituency structure from the AMR graph using a syntax model.", "Then, we fill out the constituency structure with the semantic content in the AMR graph using a lexicalisation model to generate the final surface form.", "Breaking down the AMR generation process this way provides us with several advantages: we disentangle the variance caused by the choice of syntax from that caused by the choice of words.", "We can therefore realise the same AMR graph with a variety of syntactic structures by sampling from the syntax model, and deterministically decoding using the lexicalisation model.", "We hypothesise that this generates better paraphrases of the reference realisation than sampling from a singlestep model.", "We linearise both the AMR graphs (Konstas et al., 2017) and constituency trees (Vinyals et al., 2015b) to allow us to use sequence-to-sequence models (Sutskever et al., 2014; Bahdanau et al., 2015) for the syntax and lexicalisation models.", "Further, as the AMR dataset is relatively small, we have issues with data sparsity causing poor parameter estimation for rarely seen words.", "We deal with this by anonymizing named entities, and including a copy mechanism (Vinyals et al., 2015a; See et al., 2017; Song et al., 2018) into our decoder, which allows open-vocabulary token generation.", "We show that factorising the generation process in this way leads to improvements in AMR generation, setting a new state of the art for single-model AMR generation performance training only on labelled data.", "We also verify our diverse generation hypothesis with a human annotation study.", "Abstract Meaning Repreentation Abstract Meaning Representation is a semantic annotation formalism which represents the meaning of an English utterance as a rooted directed acyclic graph.", "Nodes in the graph represent entities, events, properties and states mentioned in the text, while leaves of the graph label the nodes with concepts (which do not have to be aligned to spans in the text).", "Re-entrant nodes correspond to coreferent entities.", "Edges in the graph represent (g / give-01 :ARG0 (i / I) :ARG1 (b / ball) :ARG2 
(d / dog)) give :arg0 i :arg1 ball :arg2 dog I [gave] VP [the dog] NP [a ball] NPI [gave] VP [the ball] NP [to a dog] PP Figure 1: An example AMR graph, with variable names and verb senses, followed by the input to our system after preprocessing, and finally two sample realisations different in syntax.", "relations between entities in the text.", "See Figure 1 for an example of an AMR graph, together with sample realisations.", "Konstas et al. (2017) outline a set of preprocessing procedures for AMR graphs to both render them suitable for sequence-to-sequence learning and to ameliorate data sparsity; we follow the same pipeline.", "We train our models on the two most recent AMR releases.", "LDC2017T10 has roughly 36k training sentences, while LDC2015E86 is about half this size.", "Both share dev and test sets, facilitating comparison.", "Constituency syntax While there are many syntactic annotation formalisms, we use delexicalised Penn treebank-style constituency trees to represent syntax.", "Constituency trees have the advantage of a well-defined linearization order compared to dependency trees.", "Further, constituency trees may be easier to realise, as they effectively correspond to a bracketing of the surface form.", "Unfortunately, AMR annotated data does not come with syntactic annotation.", "We therefore parse the training and dev splits of both corpora with the Stanford parser (Manning et al., 2014) to provide silver-standard reference parse trees.", "We then delexicalise the parse trees by trimming the trees of the surface words; after this stage, the leaves of the tree are the preterminal POS tags.", "After this, we linearise the delexicalised constituency trees with depth-first traversal, following Vinyals et al. (2015b).", "We wish to estimate P ( Y, Z | X ) , the joint probability of a parse Y and surface form Z given an AMR graph X .", "We model this in two parts, using the chain rule to decompose the joint distribution.", "The first model, which we call the syntax model, approximates P ( Y | X ) , the probability of a particular syntactic structure for a meaning representation.", "The second is P ( Z | X, Y ) , the lexicalisation model.", "This calculates the probability of a surface realisation given a parse tree and an AMR graph.", "We implement both as recurrent sequence-to-sequence models.", "As we are able to linearise both the AMR graph and the parse tree, we use LSTMs (Hochreiter and Schmidhuber, 1997) both as the encoder and the decoder of our seq2seq models.", "Given an input sequence X 1 , . . . , X n , which can either be an AMR graph or a parse tree, we first embed the tokens to obtain a dense vector representation of each token x 1 , . . . 
, x n .", "Then we feed this into a stacked bidirectional LSTM encoder to obtain contextu-alised representations of each input token c i .", "As far as possible, we share parameters between our two models.", "Concretely, this means that the syntax model uses the same AMR and parse embeddings, and AMR encoder, as the lexicalisation model.", "We find that this speeds up model inference, as we only have to encode the AMR sequence once for both models.", "Further, it regularises the joint model by reducing the number of parameters.", "In our decoder, we use the dot-product formulation of attention (Luong et al., 2015): the attention potentials a i at timestep t are given by a i = h Tt 1 W att c i where h t 1 is the decoder hidden state at the previous timestep, and c i is the context representation at position i given by the encoder.", "The attention weight w i is then given by a softmax over the attention potentials, and the overall context representation s t is given by (cid:80) w i c i .", "The syntax model only attends over the input AMR graph; the linearisation model attends over both the input AMR and syntax tree independently, and the resulting context representation s t is given by the concatenation of the AMR context representation and the syntax tree context representation (Libovicky and Helcl, 2017).", "We use s t to augment the input to the LSTM: (cid:101) y t = W in tanh([ y t ; s t ]) .", "Then the LSTM hidden and cell state are updated according to the LSTM equations: h t , c t = LST M ( h t 1 , c t 1 , (cid:101) y t ) .", "Finally, we again concatenate s t to h t before calculating the logits over the distribution of tokens: (cid:101) h t = tanh( W out [ h t ; s t ]) (1) p ( y t | y <t ) = softmax( W (cid:101) h t ) (2) For the syntax model, we further constrain the decoder to only produce valid parse trees; as we build the parse tree left-to-right according to a depth-first traversal, the permissible actions at any stage are to open a new constituent, produce a terminal (i.e. 
a POS tag), or close the currently open constituent.", "We implement this constraint by setting the logits of all impermissible actions to negative infinity before taking the softmax.", "We find that this improves both training speed and final model performance, as we imbue the decoder with an intrinsic bias towards producing well-formed parse trees.", "Despite the preprocessing procedures referred to in Section 2, we found that the lexicalisation model still had trouble with out-of-vocabulary words, due to the small size of the training corpus.", "This led to poor vocabulary coverage on the unseen test portions of the dataset.", "On closer inspection, many out-of-vocabulary words in the dev split are open-class nouns and verbs, which correspond to concept nodes in the AMR graph.", "We therefore incorporate a copy mechanism (Vinyals et al., 2015a; See et al., 2017) into our lexicalisation model to make use of these alignments.", "We implement this by decomposing the word generation probability into a weighted sum of two terms.", "One is the vocabulary generation term.", "This models the probability of generating the next token from the model vocabulary, and is calculated in the same way as the base model.", "The other is a copy term, which calculates the probability of generating the next token by copying a token from the input.", "This uses the attention distribution over the input tokens calculated in the decoder to decide which input token to copy.", "The weighting between these two terms is calculated as a function of the current decoder input token, the decoder hidden state, and the AMR and parse context vectors.", "To sum up, the per-word generation probability in the decoder is given by p ( y t | y <t ) = (1 t ) p lex ( y t | y <t ) + t (cid:88) i : X i = y t w i (3) where p lex ( y t | y <t ) is as in Equation 2 and w i is the attention weight on the input token X i .", "the weighting between the generation term and the copy term: this is implemented as a 2-layer MLP.", "The AMR training corpus, together with the automatically derived parse trees, give us aligned triples of AMR graph, parse tree and realisation.", "We train our model to minimise the sum of the parse negative log-likelihood from the syntax model and the text negative log-likelihood from the lexicalisation model.", "We use the ADAM optimizer (Kingma and Ba, 2015) with batch size 40 for 200 epochs.", "We evaluate model BLEU score on the dev set during training, and whenever this did not increase after 5 epochs, we multiplied the learning rate by 0.8.", "We select the model with the highest dev BLEU score during training as our final model.", "We apply layer normalization (Ba et al., 2016) to all matrix multiplications inside our network, including in the LSTM cell, and drop out all nonrecurrent connections with probability 0.5 (Srivas-tava et al., 2014).", "We also drop out recurrent connections in both encoder and decoder LSTMs with probability 0.3, tying the mask across timesteps as suggested by Gal and Ghahramani (2016).", "All model hidden states are size 500, and token embeddings are size 300.", "Word embeddings are initialised with pretrained word2vec embeddings (Mikolov et al., 2013).", "We replace words with count 1 in the training corpus with the UNK token with probability 0.5, and replace POS tags in the parse tree and AMR concepts with the UNK token with probability 0.1 regardless of count.", "the most likely text realisation of an AMR, marginalising out over the possible parses.", "To do this, we 
heuristically find the n best parses Y 1 , . . . , Y n from the syntax model, generate a realisation Z i for each parse Y i , and take the highest scoring parse-realisation pair as the model output.", "We use beam search with width 2 for both steps, removing complete hypotheses from the active beam and appending them to a k -best list.", "We terminate search after a predetermined number of steps, or if there are no active beam items left.", "After termination, if k > n , we return the top n items of the k -best list; otherwise we return additional Model Unlabelled F1 Labelled F1 Text-to-parse 87.5 85.8 AMR-to-parse 60.4 54.8 Unconditional 38.5 31.7 Table 1: Parsing scores on LDC2017T10 dev set.", "items from the beam.", "In our experiments, we find that considering realisations of the 2 best parses (i.e. setting n = 2 above) gives the highest BLEU score on the dev set.", "We first investigate how much information AMR contains about possible syntactic realisations.", "We train two seq2seq models of the above architecture to predict the delexicalised constituency tree of an example given either the AMR graph or the text.", "We then evaluate both models on labelled and unlabelled F1 score on the dev split of the corpus.", "As neither model is guaranteed to produce trees with the right number of terminals, we first run an insert/delete aligner between the predicted and reference terminals (i.e. POS tags) before calculating span F1s.", "We also report the results of running our aligner on the most probable parse tree as estimated by an unconditional LSTM as a baseline both to control for our aligner and also to see how much extra signal is in the AMR graph.", "The results in Table 1 show that predicting a syntactic structure from an AMR graph is a much harder task than predicting from the text, but there is information in the AMR graph to improve over a blind baseline.", "Table 3 shows the results of our model on the AMR generation task.", "We evaluate using BLEU score (Papineni et al., 2002) against the reference realisations.", "As a baseline, we train a straight AMR-to-text model with the same architecture as above to control for the extra regularisation in our model compared to previous work.", "Our results Model Dev BLEU Test BLEU Trained on LDC2017T10 Our model 26.1 26.8 Our model + oracle parse 57.5 Baseline s2s + copy 23.7 23.5 Beck et al. (2018) -23.3 Trained on LDC2015E86 Our model 23.6 23.5 Our model + oracle parse 53.1 Konstas et al. (2017) 21.7 22.0 Song et al. (2018) 22.8 23.3 Trained on LDC2015E86 or earlier + additional unlabelled data Song et al. (2018) -33.0 Konstas et al. (2017) 33.1 33.8 Pourdamghani et al. (2016) 27.2 26.9 Song et al. (2017) 25.2 25.6 Table 3: BLEU results for generation.", "show that adding syntax into the model dramatically boosts performance, resulting in state-of-the-art single model performance on both datasets without using external training data.", "As an oracle experiment, we also generate from the realisation model conditioned on the ground truth parse.", "The outstanding result here BLEU scores in the 50s demonstrates that being able to predict the gold reference parse tree is a bottleneck in the performance of our model.", "However, given the inherent difficulty of predicting a single syntax realisation (cf. 
Section 4), we suspect that there is an intrinsic limit to how well generating from an AMR graph can replicate the reference realisation.", "We further note that we do not use models tailored to graph-structured data or character-level features as in Song et al. (2018); Beck et al. (2018), or additional unlabelled data to perform semi-supervised learning (Konstas et al., 2017).", "We believe that we can improve our results even further if we use these techniques.", "Our model explicitly disentangles variation caused by syntax choice from that caused by lexical choice.", "This means that we can generate diverse realisations of the same AMR graph by sampling from the syntax model and deterministically decoding from the realisation model.", "We hypothesise that this procedure generates more meaning-preserving realisations than just sampling from a straight AMR-to-text model, which can result in incoherent output (Cao and Clark, 2017).", "We selected the first 50 AMR graphs in the dev set on linearised length between 15 and 40 with coherent reference realisations and generated 5 different realisations with our joint model and our baseline model.", "For our joint model, we first sampled 3 parse structures from the syntax model with temperature 0.3.", "This means we divide the per-timestep logits of the syntax decoder by 0.3; this serves to sharpen the outputs of the syntax model and constrains the sampling process to produce relatively high-probability syntactic structures for the given AMR.", "Then, we realised each parse deterministically with the lexicalisation model.", "For the baseline model, we sample 3 realisations from the decoder with the same temperature.", "This gave us 100 examples in total.", "We then crowdsourced acceptability judgments for each example from 100 annotators: we showed the reference realisation of an AMR graph, together with model realisations, and asked each annotator to mark all the grammatical realisations which have the same meaning as the reference realisation.", "Each annotator was presented 30 examples selected randomly.", "Our results in Table 2 show that the joint model can generate more meaning-preserving realisations compared to a syntax-agnostic baseline.", "This shows the utility of separating out syntactic and lexical variation: we model explicitly meaning-preserving in-variances, and can therefore generate better paraphrases.", "We present an AMR generation model that factors the generation process through a syntactic decision, and show that this leads to improved AMR generation performance.", "In addition, we show that separating the syntactic decisions from the lexicalisation decisions allows the model to generate higher quality paraphrases of a given AMR graph.", "In future work, we would like to integrate a semantic parser into our model (Yin et al., 2018).", "Annotating data with AMR is expensive, and existing AMR treebanks are small.", "By integrating a component which parses into AMR into our model, we can do semi-supervised learning on plentiful unannotated natural language sentences, and improve our AMR generation performance even further.", "In addition, we would be able to generate text-to-text paraphrases by parsing into AMR first and then carrying out the paraphrase generation procedure described in this paper (Iyyer et al., 2018).", "This opens up scope for data augmentation for downstream NLP tasks, such as machine translation.", "The authors would like to thank Amandla Mabona and Guy Emerson for fruitful discussions.", "KC is funded by 
an EPSRC studentship." ]
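The copy mechanism in the AMR-to-text record above (its Eq. 3) mixes the decoder's vocabulary softmax with the attention distribution over the linearised input. Below is a minimal sketch of just that mixing step; the gating network that produces the mixing weight (described in the text as an MLP over the decoder input, hidden state and context vectors) is omitted, and all names are illustrative rather than taken from the paper's code.

```python
import torch

def copy_augmented_distribution(p_vocab, attn_weights, src_token_ids, copy_gate):
    """Mix the vocabulary softmax with a copy distribution over source tokens.

    p_vocab:       (batch, vocab_size) generation probabilities from the decoder softmax.
    attn_weights:  (batch, src_len) attention over the linearised AMR input tokens.
    src_token_ids: (batch, src_len) vocabulary ids of the input tokens, so copying
                   position i adds probability mass to the word X_i.
    copy_gate:     (batch, 1) mixing weight alpha_t in [0, 1].
    """
    # scatter-add the attention mass of each source position onto its word id
    p_copy = torch.zeros_like(p_vocab)
    p_copy.scatter_add_(1, src_token_ids, attn_weights)
    return (1.0 - copy_gate) * p_vocab + copy_gate * p_copy

# toy usage: batch of 1, vocabulary of 10, 4 source tokens
p_vocab = torch.softmax(torch.randn(1, 10), dim=-1)
attn = torch.softmax(torch.randn(1, 4), dim=-1)
src = torch.tensor([[2, 5, 5, 7]])          # repeated source words accumulate copy mass
gate = torch.sigmoid(torch.randn(1, 1))
mixed = copy_augmented_distribution(p_vocab, attn, src, gate)
print(mixed.sum())                           # still sums to 1 per example
```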
[ "abstain", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "method", "method", "result", "result", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "other", "other" ]
[ "Abstract We present a gradient-tree-boosting-based structured learning model for jointly disambiguating named entities in a document.", "Gradient tree boosting is a widely used machine learning algorithm that underlies many top-performing natural language processing systems.", "Surprisingly, most works limit the use of gradient tree boosting as a tool for regular classification or regression problems, despite the structured nature of language.", "To the best of our knowledge, our work is the first one that employs the structured gradient tree boosting (SGTB) algorithm for collective entity disambiguation.", "By defining global features over previous disambiguation decisions and jointly modeling them with local features, our system is able to produce globally optimized entity assignments for mentions in a document.", "Exact inference is prohibitively expensive for our globally normalized model.", "To solve this problem, we propose Bidirectional Beam Search with Gold path (BiBSG), an approximate inference algorithm that is a variant of the standard beam search algorithm.", "BiBSG makes use of global information from both past and future to perform better local search.", "Experiments on standard benchmark datasets show that SGTB significantly improves upon published results.", "Specifically, SGTB outperforms the previous state-of-the-art neural system by near 1% absolute accuracy on the popular AIDA-CoNLL dataset.", "1 1 Introduction Entity disambiguation (ED) refers to the process of linking an entity mention in a document to its corresponding entity record in a reference knowledge base (e.g., Wikipedia or Freebase).", "As a core information extraction task, ED plays an important role in the language understanding pipeline, underlying a variety of downstream applications 1 When ready, the code will be published at https:// github.com/bloomberg/sgtb .", "such as relation extraction (Mintz et al., 2009; Riedel et al., 2010), knowledge base population (Ji and Grishman, 2011; Dredze et al., 2010), and question answering (Berant et al., 2013; Yih et al., 2015).", "This task is challenging because of the inherent ambiguity between mentions and the referred entities.", "Consider, for example, the mention Washington', which can be linked to a city, a state, a person, an university, or a lake (Fig. 
1).", "Fortunately, simple and effective features have been proposed to capture the ambiguity that are designed to model the similarity between a mention (and its local context) and a candidate entity, as well as the relatedness between entities that co-occur in a single document.", "These are typically statistical features estimated from entity-linked corpora, and similarity features that are pre-computed using distance metrics such as cosine.", "For example, a key feature for ED is the prior probability of an entity given a specific mention, which is estimated from mention-entity co-occurrence statistics.", "This simple feature alone can yield 70% to 80% accuracy on both news and Twitter texts (Lazic et al., 2015; Guo et al., 2013).", "To capture the non-linear relationships between the low-dimensional dense features like statistical features, sophisticated machine learning models such as neural networks and gradient tree boosting are preferred over linear models.", "In particular, gradient tree boosting has been shown to be highly competitive for ED in recent work (Yang and Chang, 2015; Yamada et al., 2016).", "However, although achieving appealing results, existing gradient-tree-boosting-based ED systems typically operate on each individual mention, without attempting to jointly resolve entity mentions in a document together.", "Joint entity disambiguation has been shown to significantly boost performance when used in conjunction with other machine learning techniques (Ratinov et al., 2011; Hoffart et al., 2011).", "However, how to train a 777 global gradient tree boosting model that produces coherent entity assignments for all the mentions in a document is still an open question.", "In this work, we present, to the best of our knowledge, the first structured gradient tree boosting (SGTB) model for collective entity disambiguation.", "Building on the general SGTB framework introduced by Yang and Chang (2015), we develop a globally normalized model for ED that employs a conditional random field (CRF) objective (Lafferty et al., 2001).", "The model permits the utilization of global features defined between the current entity candidate and the entire decision history for previous entity assignments, which enables the global optimization for all the entity mentions in a document.", "As discussed in prior work (Smith and Johnson, 2007; Andor et al., 2016), globally normalized models are more expressive than locally normalized models.", "As in many other global models, our SGTB model suffers from the difficulty of computing the partition function (normalization term) for training and inference.", "We adopt beam search to address this problem, in which we keep track of multiple hypotheses and sum over the paths in the beam.", "In particular, we propose Bidirectional Beam Search with Gold path (BiBSG) technique that is specifically designed for SGTB model training.", "Compared to standard beam search strategies, BiBSG reduces model variance and also enjoys the advantage in its ability to consider both past and future information when predicting an output.", "Our contributions are: We propose a SGTB model for collectively disambiguating entities in a document.", "By jointly modeling local decisions and global structure, SGTB is able to produce globally optimal entity assignments for all the mentions.", "We present BiBSG, an efficient algorithm for approximate bidirectional inference.", "The algorithm is tailored to SGTB models, which can reduce model variance by generating more point-wise 
functional gradients for estimating the auxiliary regression models.", "SGTB achieves state-of-the-art (SOTA) results on various popular ED datasets, and it outperforms the previous SOTA systems by 1 2% absolute accuracy on the AIDA-CoNLL (Hoffart et al., 2011) dataset.", "In this section, we present a SGTB model for collective entity disambiguation.", "We first formally define the task of ED, and then describe a structured learning formalization for producing globally coherent entity assignments for mentions in a document.", "Finally, we show how to optimize the model using functional gradient descent.", "For an input document, assume that we are given all the mentions of named entities within it.", "Also assume that we are given a lexicon that maps each mention to a set of entity candidates in a given reference entity database (e.g., Wikipedia or Freebase).", "The ED system maps each mention in the document to an entry in the entity database.", "Since a mention is often ambiguous on its own (i.e., the lexicon maps the mention to multiple entity candidates), the ED system needs to leverage two types of contextual information for disambiguation: local information based on the entity mention and its surrounding words, and global information that exploits the document-level coherence of the predicted entities.", "Note that modeling entity-entity coherence is very challenging, as the long-range dependencies between entities correspond to exponentially large search space.", "We formalize this task as a structured learning problem.", "Let x be a document with T target mentions, and y = { y t } Tt =1 be the entity assignments of the mentions in the document.", "We use S ( x , y ) to denote the joint scoring function between the input document and the output structure.", "In traditional NLP tasks, such as part-of-speech tagging and named entity recognition, we often rely on low-order Markov assumptions to decompose the global scoring function into a summation of local functions.", "ED systems, however, are often required to model nonlocal phenomena, as any pair of entities is potentially interdependent.", "Therefore, we choose the following decomposition: S ( x , y ) = TX t =1 F ( x , y t , y 1: t 1 ) , (1) where F ( x , y t , y 1: t 1 ) is a factor scoring function.", "Specifically, a local prediction y t depends on all the previous decisions , y 1: t 1 in our model, which resembles recurrent neural network (RNN) models (Elman, 1990; Hochreiter and Schmidhuber, 1997).", "We adopt a CRF loss objective, and define a 778 Figure 1:", "p ( y | x ) = exp { P Tt =1 F ( x , y t , y 1: t 1 ) } Z ( x ) , (2) where Z ( x ) = X y 0 Gen ( x ) exp { TX t =1 F ( x , y 0 t , y 0 1: t 1 ) }", "and Gen ( x ) is the set of all possible sequences of entity assignments depending on the lexicon.", "Z ( x ) is then a global normalization term.", "As shown in previous work, globally normalized models are very expressive, and also avoid the label bias problem (Lafferty et al., 2001; Andor et al., 2016).", "The inference problem is to find arg max y Gen ( x ) p ( y | x ) = arg max y Gen ( x ) TX t =1 F ( x , y t , y 1: t 1 ) .", "An overview of our SGTB model is shown in Fig.", "1. 
The model minimizes the negative log-likelihood of the data, $L(y^*, S(x, y)) = -\log p(y^* \mid x) = \log Z(x) - S(x, y^*)$, (4) where $y^*$ is the gold output structure.", "In a standard CRF, the factor scoring function is typically assumed to have this form: $F(x, y_t, y_{1:t-1}) = \theta^\top \Phi(x, y_t, y_{1:t-1})$, where $\Phi(x, y_t, y_{1:t-1})$ is the feature function and $\theta$ are the model parameters.", "The key idea of SGTB is that, instead of defining a parametric model and optimizing its parameters, we can directly optimize the factor scoring function $F(\cdot)$ iteratively by performing gradient descent in function space.", "In particular, suppose $F(\cdot) = F_{m-1}(\cdot)$ in the $m$-th iteration; we will update $F(\cdot)$ as follows: $F_m(x, y_t, y_{1:t-1}) = F_{m-1}(x, y_t, y_{1:t-1}) - \eta_m g_m(x, y_t, y_{1:t-1})$, (5) where $g_m(x, y_t, y_{1:t-1}) = \frac{\partial L(y^*, S(x, y))}{\partial F(x, y_t, y_{1:t-1})} = p(y_{1:t} \mid x) - 1[y_{1:t} = y^*_{1:t}]$ (6) is the functional gradient, $\eta_m$ is the learning rate, and $1[\cdot]$ represents an indicator function, which returns 1 if the predicted sequence matches the gold one, and 0 otherwise.", "We initialize $F(\cdot)$ to 0 ($F_0(\cdot) = 0$).", "We can approximate the negative functional gradient $-g_m(\cdot)$ with a regression tree model $h_m(\cdot)$ by fitting the training data $\{(x^{(i)}, y^{(i)}_t, y^{(i)}_{1:t-1})\}$ to the point-wise negative functional gradients (also known as residuals) $\{-g_m(x^{(i)}, y^{(i)}_t, y^{(i)}_{1:t-1})\}$.", "Then the factor scoring function can be obtained by $F(x, y_t, y_{1:t-1}) = \sum_{m=1}^{M} \eta_m h_m(x, y_t, y_{1:t-1})$, (7) where $h_m(x, y_t, y_{1:t-1})$ is called a basis function.", "We set $\eta_m = 1$ in this work.", "Training the SGTB model requires computing the point-wise functional gradients with respect to training documents and candidate entity sequences.", "This is challenging, due to the exponential output structure search space.", "First, we are not able to enumerate all possible candidate entity sequences.", "Second, computing the conditional probabilities shown in Eq.", "6 is intractable, as it is prohibitively expensive to compute the partition function $Z(x)$ in Eq.", "2. 
Beam search can be used to address these problems.", "We can compute point-wise functional gradients for candidate entity sequences in the beam, and approximately compute the partition function by summing over the elements in the beam.", "In this section, we present a bidirectional beam search training algorithm that always keeps the gold sequence in the beam.", "The algorithm is tailored to SGTB, and improves standard training methods in two aspects: (1) it reduces model variance by collecting more point-wise function gradients to train a regression tree; (2) it leverages information from both past and future to conduct better local search.", "The early update (Collins and Roark, 2004) and LaSO (Daume III and Marcu, 2005; Xu and Fern, 2007) strategies are widely adopted with beam search for updating model parameters in previous work.", "Both methods keep track of the location of the gold path in the beam while decoding a training sequence.", "A gradient update step will be taken if the gold path falls out of the beam at a specific time step t or after the last step T .", "Adapting the strategies to SGTB training is straightforward.", "We will compute point-wise functional gradients for all candidate entity sequences after time step T or when the gold sequence falls out the beam.", "Both early update and LaSO are typically applied to online learning scenarios, in which model parameters are updated after passing one or a few training sequences.", "SGTB training, however, fits the batch learning paradigm.", "In each training epoch, a SGTB model will be updated only once using the regression tree model fit on the point-wise negative functional gradients.", "The gradients are calculated with respect to the output sequences obtained from beam search.", "We propose a simple training strategy that computes and collects point-wise functional gradients at every step of a training sequence.", "In addition, instead of passively monitoring the gold path, we always keep the gold path in the beam to ensure that we have valid functional gradients at each time step.", "The new beam search training method, Beam Search with Gold path (BSG), generates much more point-wise functional gradients than early update or LaSO, which can reduce the variance of the auxiliary regression tree model.", "As a result, SGTB trained with BSG consistently outperforms early update or LaSO in our exploratory experiments, and it also requires fewer training epochs to converge.", "2 3.2 Bidirectional beam search During beam search, if we consider a decision made at time step t , the joint probability p ( y | x ) can be factorized around t as follows: p ( y | x ) = p ( y 1: t 1 | x ) p ( y t | y 1: t 1 , x ) p ( y t +1: T | y t , y 1: t 1 , x ) .", "(8) Traditional beam search performs inference in a unidirectional (left-to-right) fashion.", "Since the beam search at time step t considers only the beam sequences that were committed to so far, { y 1: t 1 } , it effectively approximates the above probability by assuming that all futures are equally likely, i.e. 
p ( y t +1: T | y t , y 1: t 1 , x ) is uniform.", "Therefore, at any given time, there is no information from the future when incorporating the global structure.", "In this work, we adopt a Bidirectional Beam Search (BiBS) methodology that incorporates multiple beams to take future information into account (Sun et al., 2017).", "It makes two simplifying assumptions that better approximate the joint probability above while remaining tractable: (1) future predictions are independent of past predictions given y t ; (2) p ( y t ) is uniform.", "These yield the following approximation: p ( y t +1: T | y t , y 1: t 1 , x ) = p ( y t +1: T | y t , x ) p ( y t | y t +1: T , x ) p ( y t +1: T | x ) .", "2 Early update and LaSO perform similarly, thus we only report results for early update in 5.", "which decomposes into multiplication of a forward probability and a backward probability.", "In (Sun et al., 2017), these are retrieved from forward and backward recurrent networks, whereas in our work we use the joint scores (log probabilities shown in Eq. 1) computed for partial sequences from forward and backward beams.", "The full inference algorithm, Bidirectional Beam Search with Gold path (BiBSG), is presented in Alg.", "1. When performing the forward pass to update the forward beam, forward joint scores, S ( x , y 1: t ) , are computed with respect to current forward beam, and backward joint scores, S ( x , y T : t ) , are computed with respect to previous backward beam.", "A similar procedure is used for the backward pass.", "The search converges very fast, and we use two rounds of bidirectional search as a good approximation.", "Finally, SGTB-BiBSG compares the conditional probabilities p ( y ( ) | x ) of the best scoring output sequences y (F) and y (B) obtained from the forward and backward beams.", "The final prediction is the sequence with the higher conditional probability score.", "adopted local and global features, and some efforts to make training and inference faster.", "We use a mention prior p ( y | x ) to select entity candidates for a mention x .", "Following Ganea and Hofmann (2017), the prior is computed by averaging mention prior probabilities built from mention-entity hyperlink statistics from Wikipedia 3 and a large Web corpus (Spitkovsky and Chang, 2012).", "Given a mention, we select the top 30 entity candidates according to p ( y | x ) .", "We also use a simple heuristic proposed by Ganea and Hofmann (2017) to improve candidate selection for persons: for a mention x , if there are mentions of persons that contain x as a continuous subsequence of words, then we consider the candidate set obtained from the longest mention for the mention x .", "The feature function ( x , y t , y 1 : t 1 ) can be decomposed into the summation of a local feature function L ( x , y t ) and a global feature function G ( y t , y 1: t 1 ) .", "Local features We consider standard local features that have been used in prior work, including mention priors p ( y | x ) obtained from different resources; entity popularity features based on Wikipedia page view count statistics; 4 named entity recognition (NER) type features given by an in-house NER system trained on the CoNLL 2003 NER data (Tjong Kim Sang and De Meulder, 2003); entity type features based on Freebase type information; and three textual similarity features proposed by Yamada et al. 
(2016).", "5 Global features Three features are utilized to characterize entity-entity relationships: entity-entity co-occurrence counts obtained from Wikipedia, and two cosine similarity scores between entity vectors based on entity embeddings from (Ganea and Hofmann, 2017) and Freebase entity embeddings released by Google 6 3 We use a Wikipedia snapshot as of Feb. 2017.", "respectively.", "We denote the entity-entity features between entities y t and y t 0 as E ( y t , y t 0 ) .", "At step t of a training sequence, we quantify the coherence of y t with respect to previous decisions y 1: y 1 by first extracting entity-entity features between y t and y t 0 where 1 t 0 t 1 , and then aggregating the information to have a global feature vector G ( y t , y 1: t 1 ) of a fixed length: G ( y t , y 1: t 1 ) = t 1 X t 0 =1 E ( y t , y t 0 ) t 1 t 1 max t 0 =1 E ( y t , y t 0 ) , where denotes concatenation of vectors.", "Global models are powerful and effective, but often at a cost of efficiency.", "We discuss ways to speed up training and inference for SGTB models.", "Many of the adopted features such as mention priors and entity-entity co-occurrences can be extracted once and retrieved later with just a hash map lookup.", "The most expensive features are the cosine similarity features based on word and entity embeddings.", "By normalizing the embeddings to have a unit norm, we can obtain the similarity features using dot products.", "We find this simple preprocessing makes feature extraction faster by two orders of magnitude.", "SGTB training can be easily parallelized, as the computation of functional gradients are independent for different documents.", "During each training iteration, we randomly split training documents into different partitions, and then calculate the point-wise functional gradients for documents of different partitions in parallel.", "In this section, we evaluate SGTB on some of the most popular datasets for ED.", "After describing the experimental setup, we compare SGTB with previous state-of-the-art (SOTA) ED systems and present our main findings in 5.3.", "We use six publicly available datasets to validate the effectiveness of SGTB.", "AIDA-CoNLL (Hof-fart et al., 2011) is a widely adopted dataset for ED based on the CoNLL 2003 NER dataset (Tjong Kim Sang and De Meulder, 2003).", "It is Dataset # mention # doc # mention per doc AIDA-train 18,448 946 19.5 AIDA-dev 4,791 216 22.1 AIDA-test 4,485 231 19.4 AQUAINT 727 50 14.5 MSNBC 656 20 32.8 ACE 257 36 7.1 CWEB 11,154 320 34.8 WIKI 6,821 320 21.3 Table 1: Statistics of the ED datasets used in this work.", "further split into training (AIDA-train), development (AIDA-dev), and test (AIDA-test) sets.", "7 AQUAINT (Milne and Witten, 2008), MSNBC (Cucerzan, 2007), and ACE (Ratinov et al., 2011) are three datasets for Wikification, which also contain Wikipedia concepts beyond named entities.", "These datasets were recently cleaned and updated by Guo and Barbosa (2016).", "WIKI and CWEB are automatically annotated datasets built from the ClueWeb and Wikipedia corpora by Guo and Barbosa (2016).", "The statistics of these datasets are available in Table", "1. 
5.2 Experimental settings Following previous work (Guo and Barbosa, 2016; Ganea and Hofmann, 2017), we evaluate our models on both in-domain and cross-domain testing settings.", "In particular, we train our models on AIDA-train set, tune hyperparameters on AIDA-dev set, and test on AIDA-test set (in-domain testing) and all other datasets (cross-domain testing).", "We follow prior work and report in-KB accuracies for AIDA-test and Bag-of-Title (BoT) F1 scores for the other test sets.", "Two AIDA-CoNLL specific resources have been widely used in previous work.", "In order to have fair comparisons with these works, we also adopt them only for the AIDA datasets.", "First, we use a mention prior obtained from aliases to candidate entities released by Hoffart et al. (2011) along with the two priors described in 4.1.", "Second, we also experiment with PPRforNED, an entity candidate selection system released by Pershina et al. (2015).", "It is unclear how candidates were pruned, but the entity candidates generated by this system have high recall and low ambiguity, and they contribute to some of the best results reported for AIDA-test (Yamada et al., 2016; Sil et al., 2018).", "7 AIDA-dev and AIDA-test are also referred as AIDA-a and AIDA-b datasets in previous work.", "Competitive systems We implement four competitive ED systems, and three of them are based on variants of our proposed SGTB algorithm.", "8 Gradient tree boosting is a local model that employs only local features to make independent decisions for every entity mention.", "Note that our local model is different from that presented by Yamada et al. (2016), where they treat ED as binary classification for each mention-entity pair.", "SGTB-BS is a Structured Gradient Tree Boosting model trained with Beam Search with early update strategy.", "SGTB-BSG uses Beam Search with Gold path training strategy presented in 3.1.", "Finally, SGTB-BiBSG exploits Bidirectional Beam Search with Gold path to leverage information from both past and future for better local search.", "In addition, we compare against best published results on all the datasets.", "To ensure fair comparisons, we group results according to candidate selection system that different ED systems adopted.", "Parameter tuning We tune all the hyperparameters on the AIDA-dev set.", "We use recommended hyperparameter values from scikit-learn to train regression trees, except for the maximum depth of the tree, which we choose from { 3 , 5 , 8 } .", "After a set of preliminary experiments, we select the beam size from { 3 , 4 , 5 , 6 } .", "The best values for the two hyperparameters are 3 and 4 respectively.", "As mentioned in 2, the learning rate is set to 1 .", "We train SGTB for at most 500 epochs (i.e., fit at most 500 regression trees).", "During training, we check the performance on the development set every 25 epochs to perform early stopping.", "Training takes 3 hours for SGTB-BS and SGTB-BSG, and takes 9 hours for SGTB-BiBSG on 16 threads.", "In-domain results In-domain evaluation results are presented in Table", "2. 
As shown, SGTB achieves much better performance than all previously published results.", "Specifically, SGTB-BiBSG outperforms the previous SOTA system (Ganea and Hofmann, 2017) by 0 .", "8% accuracy, and improves upon the best published results when employing the PPRforNED candidate selection system by 1 .", "9% accuracy.", "Global information is clearly useful, as it helps to boost the performance by 2 4 points of accuracy, depending on the candidate generation system.", "In terms of beam 8 Our implementations are based on the scikit-learn package (Pedregosa et al., 2011).", "search training strategies, BiBSG consistently outperforms BSG and beam search with early update.", "By employing more point-wise functional gradients to train the regression trees and leveraging global information from both past and future to carry on local search, BiBSG is able to find better global solutions than alternative training strategies.", "Cross-domain results As presented in Table 3, cross-domain experimental results are a little more mixed.", "SGTB-BS and SGTB-BSG perform quite competitively compared with SGTB-BiBSG.", "In a cross-domain evaluation setting, the test data is drawn from a different distribution as the training data.", "Therefore, less expressive models may be preferred as they may learn more abstract representations that will generalize better to out-of-domain data.", "Nevertheless, our SGTB models achieve better performance than best published results on three of the five popular ED datasets.", "Specifically, SGTB-BS outperforms the prior SOTA system by absolute 4% F1 on the CWEB dataset, and SGTB-BiBSG performs consistently well across different datasets.", "Entity disambiguation Most ED systems consist of a local component that models relatedness between a mention and a candidate entity, as well as a global component that produces coherent entity assignments for all mentions within a document.", "Recent research has largely focused on joint resolution of entities, which is usually performed by maximizing the global topical coherence between entities.", "As discussed above, directly optimizing the coherence objective is computationally intractable, and several heuristics and approximations have been proposed to address the problem.", "Hoffart et al. (2011) use an iterative heuristic to remove unpromising mention-entity edges.", "Yamada et al. (2016) employ a two-stage approach, in which global information is incorporated in the second stage based on local decisions from the first stage.", "Approximate inference techniques have been widely adopted for ED.", "Cheng and Roth (2013) use an integer linear program (ILP) solver.", "Belief propagation (BP) and its variant loopy belief propagation (LBP) have been used by Ganea et al. (2016) and Ganea and Hofmann (2017) respectively.", "We employ another standard approximate inference algorithm, beam search, in this work.", "To make beam search a better fit for SGTB training, we propose BiBSG that improves beam search training on stability and effectiveness.", "Structured gradient tree boosting Gradient tree boosting has been used in some of the most accurate systems for a variety of classification and regression problems (Babenko et al., 2011; Wu et al., 2010; Yamada et al., 2016).", "However, gradient tree boosting is seldom studied in the context of structured learning, with only a few exceptions.", "Dietterich et al. 
(2004) propose TreeCRF that replaces the linear scoring function of a CRF with a scoring function given by a gradient tree boosting model.", "TreeCRF achieves comparable or better results than CRF on some linear chain structured prediction problems.", "Bagnell et al. (2007) extend the Maximum Margin Planning (MMP; Ratliff et al., 2006) algorithm to structured prediction problems by learning new features using gradient boosting machines.", "Yang and Chang (2015) present a general SGTB framework that is flex-ible in the choice of loss functions and specific structures.", "They also apply SGTB to the task of tweet entity linking with a special non-overlapping structure.", "By decomposing the structures into local substructures, exact inference is tractable in all the aforementioned works.", "Our work shows that we can train SGTB models efficiently and effectively even with approximate inference.", "This extends the utility of SGTB models to a wider range of interesting structured prediction problems.", "In this paper, we present a structured gradient tree boosting model for entity disambiguation.", "Entity coherence modeling is challenging, as exact inference is prohibitively expensive due to the pairwise entity relatedness terms in the objective function.", "We propose an approximate inference algorithm, BiBSG, that is designed specifically for SGTB to solve this problem.", "Experiments on benchmark ED datasets suggest that the expressive SGTB models are extremely good at dealing with the task of ED.", "SGTB significantly outperforms all previous systems on the AIDA-CoNLL dataset, 784 and it also achieves SOTA results on many other ED datasets even in the cross-domain evaluation setting.", "SGTB is a family of structured learning algorithms that can be potentially applied to other core NLP tasks.", "In the future, we would like to investigate the effectiveness of SGTB on other information extraction tasks, such as relation extraction and coreference resolution.", "We thank Prabhanjan Kambadur and other people in the Bloomberg AI team for their valuable comments on earlier version of this paper.", "We also thank the NAACL reviewers for their helpful feedback.", "This work also benefitted from discussions with Mark Dredze and Karl Stratos." ]
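Illustrative note (not part of the source row above): the global feature pooling described there reduces the pairwise entity-entity features between the current candidate and every earlier decision to a fixed-length vector by concatenating their average and element-wise maximum. Below is a minimal sketch of that pooling, assuming `psi_E` is a placeholder pairwise feature function and that at least one previous decision exists.

```python
import numpy as np

def global_features(psi_E, y_t, history):
    """[average ; element-wise max] of pairwise entity-entity features between
    the current candidate y_t and each previous decision in `history` (y_1..y_{t-1})."""
    pairwise = np.array([psi_E(y_t, y_prev) for y_prev in history])
    return np.concatenate([pairwise.mean(axis=0), pairwise.max(axis=0)])
```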
[ "method", "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "objective", "abstain", "objective", "objective", "method", "objective", "objective", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "method", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "other", "other", "other" ]
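The SGTB update summarized in the row above can be illustrated with a short sketch: for each document, point-wise negative functional gradients (residuals) are computed over beam-approximated candidate prefixes, a regression tree is fit to them, and the tree is added to the ensemble. This is a hedged illustration under simplifying assumptions (precomputed factor feature vectors and beams supplied as input), not the authors' implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor


class SGTBSketch:
    """Minimal gradient-boosting-in-function-space loop (cf. Eqs. 5-7 above)."""

    def __init__(self, max_depth=3, learning_rate=1.0):
        self.trees = []              # basis functions h_m
        self.eta = learning_rate     # eta_m; the paper fixes it to 1
        self.max_depth = max_depth

    def score(self, feats):
        """F(x, y_t, y_{1:t-1}) for a batch of factor feature vectors."""
        if not self.trees:
            return np.zeros(len(feats))          # F_0(.) = 0
        return self.eta * sum(tree.predict(feats) for tree in self.trees)

    def boost_step(self, beams):
        """One boosting iteration.  `beams` is a list over documents/time steps;
        each item is a list of (feature_vector, is_gold_prefix) candidates kept
        in the beam, and the partition function is approximated by summing
        over the beam, as described in the passage above."""
        X, residuals = [], []
        for candidates in beams:
            feats = np.array([f for f, _ in candidates])
            scores = self.score(feats)
            probs = np.exp(scores - scores.max())
            probs /= probs.sum()                  # beam-approximated p(y_{1:t} | x)
            for (f, is_gold), p in zip(candidates, probs):
                X.append(f)
                residuals.append(float(is_gold) - p)   # negative functional gradient
        tree = DecisionTreeRegressor(max_depth=self.max_depth)
        tree.fit(np.array(X), np.array(residuals))
        self.trees.append(tree)
```

Running `boost_step` for M iterations and summing the tree predictions reproduces the additive form of Eq. 7 with eta_m = 1.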
[ "Neural Machine Translation (NMT) systems exhibit problematic biases, such as stereotypical gender bias in the translation of occupation terms into languages with grammatical gender.", "In this paper we describe a new source of bias prevalent in NMT systems, relating to translations of sentences containing person names.", "To correctly translate such sentences, a NMT system needs to estimate the gender of names.", "We show that leading systems are particularly poor at this task, especially for female given names.", "This bias is deeper than given name gender: we show that the translation of terms with ambiguous sentiment can also be affected by person names, and the same holds true for proper nouns denoting race.", "To mitigate these biases we propose a simple but effective data augmentation method based on randomly switching entities during translation, which effectively eliminates the problem without any effect on translation quality.", "Natural language processing systems are seeing widespread adoption, prompting careful study into cultural biases they exhibit, and methods for bias mitigation.", "Gender bias is common in automated systems (Park et al., 2018; Borkan et al., 2019; Stanovsky et al., 2019; Saunders and Byrne, 2020), with a leading cause being training corpora that include far more sentences referring to men than to women.", "A neural machine translation (NMT) system navely trained on such data is more likely to translate text that should be feminine into masculine when translating into a language with grammatical gender.", "Previously, researchers (Stanovsky et al., 2019; Escud Font and Costa-juss, 2019; Saunders and Byrne, 2020; Stafanovics et al., 2020) have demonstrated that NMT systems can still be biased even when there are explicit gender pronouns in the input sentences.", "NMT systems are not only biased for gender, and gender bias is not limited to gender pronouns.", "Other biases include racial biases, professional biases, and individual biases, among others.", "In this paper, we focus on two kinds of biases of person name translations by NMT systems: gender biases and sentiment biases.", "As an important category of named entity, person names are particularly sensitive to translation errors since they refer to real-world individuals, and systematic biases may cause serious distress to users, and reputational damage, libel or other legal consequences for vendors.", "Gender bias in the translation of person names is a natural extension of gender biases in previous work.", "For instance, (Stanovsky et al., 2019; Escud Font and Costa-juss, 2019; Saunders and Byrne, 2020; Stafanovics et al., 2020) considered whether translation systems can translate keywords such as occupation terms into the correct form when there is explicit gender information in the text.", "This paper can be seen as replacing this explicit gender information (pronouns) with implicit gender information (person names), to test whether an NMT system can correctly determine the gender of a name.", "Our results indicate that NMT systems often mistakes female names for males, but the reverse is rarely seen; a situation that may cause widespread offence.", "Biases pertaining to sentiment of sentences containing person names have been studied in sentiment analysis (Kiritchenko and Mohammad, 2018), where model predictions of sentiment are sensitive to changing the person name.", "We present a method for detecting sentiment bias in translation based on the translation of sentiment ambiguous words, where 
the system must choose between a commendatory and derogatory translation ( e.g. , proud can mean either satisfied or arrogant about one's achievements).", "When the correct translation is not clear from the context, NMT systems use the person name to decide.", "When this occurs consistently 2576 towards a specific sentiment, this can result in insidious bias against (or towards) individuals (or as we also show, racial groups.) To mitigate the above biases against person names in translation, we propose a data-augmentation method switch-entity' ( SE ), which works by altering training sentences containing named entities by randomly switching the entities for other entities of the same type ( e.g. , with matching gender).", "This simple strategy normalises the distribution of named entities, such that all names are observed sufficiently many times and in a diverse range of contexts.", "This ensures gender signals are learned correctly, and also stops the translation system from associating the name with idiosyncracies of the contexts in which is appears, thus mitigating sentiment bias.", "Modifying the training data carries the risk of degrading sentence quality, and thus degrading accuracy.", "Although replacing a named entity with another does change sentence meaning, it is unlikely to compromise grammaticality or render the sentence semantically incoherent.", "Our results show that SE beneficially mitigates gender bias when translating names into gendered languages, which we show leads to more accurate morphological inflection in sentences with female entities.", "At the same time, it does not sacrifice accuracy: the BLEU score of the SE -trained model is the same as for standard training.", "We show two new biases for person names in NMT, relating to gender and sentiment.", "Using constructed templates we show this is a widespread problem affecting state-of-the-art NMT systems.", "We propose a data augmentation method, switch-entity, to mitigate these biases in training, without the need for extra data.", "In languages with rich grammatical gender, the gender of people referenced in a sentence will often affect the morphology of the other words in the sentence.", "For example, [PER] is a Royal Designer translates into German as either Masc.", "where gender agreement holds between the person (PER) and the determiner, adjective and occupation noun.", "Accordingly, knowing the gender of Input (English) Translation (German) She is the developer of the company.", "the person is critical when translating from a language like English, where gender is rarely marked, into a gendered language.", "Ignoring this issue will affect the quality of outputs, and consistent mistakes can constitute a form of gender bias.", "Previous works (Stanovsky et al., 2019; Escud Font and Costa-juss, 2019; Saunders and Byrne, 2020; Stafanovics et al., 2020) showed that NMT systems exhibit gender biases, due to a large skew towards male persons in the training data, resulting in NMT systems producing gender agreement mistakes when translating sentences containing a feminine pronoun.", "A more complex situation arises when presented with person names: gender is not explicitly marked, but is only implied, and the translation system must deduce the gender in order to correctly inflect the translation.", "1 Being able to correctly translate sentences with gender pronouns does not guarantee the correct translation of name sentences, as illustrated in the examples in Table", "1. 
2.1 Template for person name bias Here, we propose an evaluation method for assessing whether gender is translated accurately for English German and English French.", "We created a range of templates encoding various syntactic relations which require gender agreement, and assess whether the translation includes the correct morphological inflection ( e.g. , for the above, the choice between Designer vs. Designerin ).", "Table 2 shows a selection of the 30 templates we use for measuring the accuracy of gender agreement.", "Each template includes a person name slot, which we replace with a name from a list of male and female names.", "1 Without additional resources, gender deduction will never be perfect.", "A natural extension would include named entity linking to a knowledge base which stores gender inflection and pronouns for each individual.", "The main body of the grammatical gender system is the name, and it forms an agreement system with other verbs, articles and adjectives.", "Words will change to some extent (usually inflectional affixes) according to gender.", "For example, in German, feminine occupation nouns end with in'.", "In our test, we check whether the translation includes the correct form of the underlined noun, which should agree with the gender of the person.", "Strictly speaking, for the translation to be perfect, other words in the translation will also require gender agreement, however for the sake of simplicity we limit our attention to the noun.", "From visual inspection, when the form of the noun is correctly predicted most often means all tokens have correct gender agreement.", "To evaluate gender bias with respect to names, we must first account for the confound of bias on key nouns.", "For example, some en-de models always translate teacher into feminine form Lehrerin, and never the masculine form.", "Thus for these models a test template for teacher will not help to measure gender bias for names.", "Thus, we first filter the templates using the pronouns he and she and remove from consideration all templates for the which key noun only has one translation.", "Then we tested each machine translation system with these filtered templates using a set of 200 full names and 200 first names.", "Metrics: We measure accuracy, Acc , the proportion of the number of key nouns are translated into the correct form to the total number of templates tested, to evaluate name gender bias.", "We report the mean accuracy for male and female names separately, denoted Acc m and Acc f , respectively, as well as the absolute difference between these scores, denoted Acc .", "Names: Generally speaking, a name may be first name, last name or full name.", "The last name usually does not carry gender information, so we only tested the first name and full name (full lists of names are in the Appendix, Table 12).", "For first names, we used a data set of first names and their frequencies from U.S. births.", "2 We find the set of names with obvious gender, where the frequency of one gender is more than three times that of the other, and the absolute value of the difference is more than 100.", "We reduce this list to 100 female and 100 male names by selecting for each gender the top 1000 names by frequency then randomly sampling 100 names uniformly.", "Full names were extracted from the ParaCrawl corpus, and the U.S. 
births data set was used to label their gender based on the first name, the names were filtered as above, and finally we randomly selected 100 names each male and female.", "Note that this process is limited to binary gender, but could feasibly be extended to non-binary gender with the right resources, which we leave to future work.", "Language and Models: We tested English German and English French, chosen based on English not having grammatical gender while German and French both do.", "In both settings we compare three online translation systems, 3 2 https://courses.cs.duke.edu/ /compsci307d/fall20/assign/01_data/data/ssa_complete/ 3 Namely, Google Translate, Bing Translator and AWS translate.", "off-the-shelf pretrained research systems, and several custom trained models.", "Overall the systems cover both transformer and convolutional network architectures, and are trained over different corpora.", "Please see Appendix A for further details.", "The test results are shown in Table 3, it can be clearly seen that the NMT system favours male names, with all results far better than for female names, even for the commercial translation systems.", "The smallest Acc is as high as 13.7%.", "However, better performance does not guarantee fewer biases, the BLEU value of custom.wmt18 is higher than transformer.wmt16 , but both first name and full name Acc f are lower.", "All models perform better on full names than first names, it may be because there are more uncommon names in the first names, and full names will contain more information.", "conv.wmt17 has a large bias, it barely detects the female names at all.", "Compared with custom.iwslt17 , which also has a high bias, conv.wmt17 uses much larger corpus and its predictive performance is much higher than cus-tom.iwslt17 .", "Such a high bias may be caused by the convolutional architecture, which cannot capture word level phenomena as well as the transformer.", "Comparing two en-fr wmt14 models, the evaluation results of conv model is also worse than transformer model.", "In general, the larger corpus, denote them as A-C.", "the less the name bias is present.", "This is because the larger the amount of data, the model is exposed to more names, and can better distinguish their gender.", "However, obtaining more data is usually not easy, especial for low-resource language.", "In NMT training corpora, names appear in different contexts, which can result in sentiment biases for specific names.", "For instance, a popular celebrity is likely to appear in many more positive sentiment contexts than a reviled mafioso, which may mean a NMT system mistakenly associate person names with translation sentiment.", "We set about measuring whether this manifests in NMT output using templated ambiguous contexts in English in which the ambiguity must be resolved when translating into the target language.", "To do so we use sentiment ambiguous words : a kind of homograph which has both commendatory and derogatory meanings.", "This is illustrated in Table 4, which shows a translation system is biased towards James over Alice.", "We chose a cross-language family translation of English Chinese for this experiment, inspired by the work of Si et al. (2019).", "Si et al. 
(2019) constructed ambiguous test sentences based on 110 sentiment ambiguous words.", "Starting with their list, we filtered out words that did not fit with PERSON entities, leaving 30 words.", "We then constructed a template for each word to test for sentiment bias on names.", "Table 5 show a selection of the templates (see also Table 11 in the Appendix for the full set).", "We kept the sentences 2579 Template en sentence Positive / Negative [PERSON] is so proud.", "simple since we want to eliminate the influence of context, and thereby assess how person names affect the translations of sentiment ambiguous words.", "We conducted similar evaluation progress as gender agreement test (Section 2.2).", "The labelled translation words shown in the Table 5 present some examples.", "Because en-zh translation has high entropy, words can have many different translations and the use of dictionaries often fails to cover all possibilities.", "Therefore, when using these templates, we manually check the translation results and add any new translations to our inventory of positive and negative words.", "Metrics We have two evaluation metrics for names' sentiment tendencies: word-level positiveness t and sentence-level positiveness s .", "The word-level positiveness is evaluated by checking the translations of sentiment ambiguous words, calculating the ratio of the number of sentences that sentiment ambiguous words translated to positive words, to the total number of template sentences.", "The sentence-level positiveness is scored by a sentiment analysis classifier (Tian et al., 2020), applied to the translation to get the probability the sentence is a positive sentence, 4 after which we report the mean score over the 30 templates for each name.", "In order to measure the overall degree of sentiment bias of models, we report the highest and lowest mean scores among all person names, as well as the gap between these values, denoted t and s for word and sentence level, respectively.", "4 To remove potentially confound bias from the sentiment classier, we masked PERSON names, replacing all names with masculine pronouns [en: he].", "For example, when we use sentiment analysis to score translation , we first convert sentence into System t min t max t s min s max s Online A 0.47 0.67 0.20 0.40 0.49 0.09 Online B 0.40 0.70 0.30 0.38 0.57 0.20 Online C 0.43 0.60 0.17 0.46 0.65 0.09 opus.en-zh 0.23 0.50 0.27 0.23 0.38 0.15 wmt17 0.26 0.67 0.40 0.40 0.67 0.27 Table 6: The sentiment biases test results on five NMT systems.", "Names For sentiment biases, we used the full names of celebrities, for which we expect sufficient data for NMT systems to learn biases.", "We selected the top 10 popular male celebrities and 10 female celebrities across 7 different occupations (see list in Table 13 in the Appendix).", "We expect different professions to have a substantial impact on training contexts, which may result in different degrees of bias.", "Gender, race and nationality Our templates can be used not only to test names but also to test other sentiment biases, such as gender, race and nationality.", "We used 8 different races and nationalities to fill the templates, which we minimally adapted to ensure they are grammatically correct.", "Additionally, we add man or woman ( e.g. 
, Asian men) to measure intersectional racial and gender bias.", "Models We tested three commercial systems, as before; and two research models: a pretrained model opus.en-zh and a custom transformer model custom.wmt17 trained with wmt17 en-zh corpus.", "Overall bias Table 6 shows the results of the sentiment bias test for several en-zh NMT systems.", "5 It can be seen that wmt17 has the largest bias, and Online C the smallest, although even this system has a substantial range of sentiment with t = 0 .", "17 .", "The opus.en-zh trained system is uniformly more negative than the other systems.", "Biases per profession and gender We further split the results by occupation and gender, as shown in Figure 1 for Online A .", "From this it is clear that some occupations are more positively translated than others ( e.g. , atheletes vs. actors/actresses) and 5 The BLEU score of opus.en-zh and wmt17 on new-stest2017 is 26.19 and 34.87, respectively.", "in some professions there appears to be evidence of gender bias, such as preferential treatment of actors over actresses, and female politicians and entrepreneurs over their male counterparts.", "Overall there is limited evidence for general gender bias, as the average scores for male and female entities are similar, but note that the results for men is more concentrated than for women, which is more polarized.", "Biases on race and nationality The results for testing race and nationality terms are shown in Figure 2, which overall shows that race and nationality have substantial influence on translation sentiment.", "Black man and Japanese man have the most negative results, and Asian and Australian the most positive.", "There is no consistent evidence of gender bias, however it is surprising that there is often a sizeable (mostly positive) difference between using a race or nationality term on its own versus its use alongside a gender term (man/woman).", "Bias in NMT models are mainly caused by the training data, which is typically unbalanced, e.g. , females are much rarer than males in the training corpus, leading to gender bias.", "One simple way to balance out gender biases is to add a number of female data to balance the ratio of male to female sentences.", "However, obtaining new data can be difficult, especially for low-resource languages.", "Here, we propose a data augmentation method that does not require additional data, SWITCHENTITY .", "By switching names in the training corpus, the model can train with more correct translation patterns about female names, so that the model can correctly identify the gender of the name, and achieve the effect of reducing biases.", "This method can be applied not only to PERSON entities, but also to other classes of named entities.", "Let x t , y t be the language pair containing the named entity t and t x , t y be the named entity pair.", "L el be the candidate list of named entities, where e is the entity type and l the language.", "The replacement candidate list L can be obtained from different resources.", "Here we present a method to extract L from the original corpus, NER models (at least one side) and alignment tool are required:", "1. Use NER to identify named entities on both the source and target sentences;", "6 2. Perform automatic word-alignment over the parallel corpus; and", "3. 
Use the alignment to find the corresponding t x , t y , which form a named entity pair.", "To ensure precision in step 3 we adopt a conserva-tive approach: If some aligned tokens of a named entity are parts of a named entity in the other language with the same type, they will be regarded detected as a pair.", "One further step is performed only on person entities, where this category is further split into male and female classes based on the person's given name, if available.", "Once the candidate list of entities has been computed, the last step in applying SE involves switching each of the named entities identified above with another named entity during each epoch training, 6 The method also works with NER on one side only, but it may sacrifice precision.", "which is drawn uniformly from the set of entities of the same type (and gender, when considering persons).", "To illustrate, in the following we switch out Al Gore for JAY-Z: (1) Candidate Al Gore concedes the US election.", "Kandidat Al Gore rumt die US-Wahlen ein.", "(2) Candidate JAY-Z concedes the US election.", "Kandidat JAY-Z rumt die US-Wahlen ein.", "In corpora, the distribution of names is usually skewed such that the majority of names have very low frequency, and these names are not well learned by the model.", "SE has the effect of flattening the distribution over entity strings, while preserving the natural distribution over entity types, ensuring the model focuses more on learning to translate names in the tail.", "Switching any parts of a training sentence carries the risk of corrupting the data, both grammatically and semantically, and this will depend on the granularity of named entity labels.", "Switching named entities with others of the same type is key to maintain the sentences' quality.", "For instance, if we mistakenly switch male and female names, it will corrupt training and may result in gender agreement mistakes in translation.", "In the example shown above, we cannot switch Al Gore with a female name without changing Kandidat from masculine to feminine gender.", "For this reason we refine the PERSON entity category to include gender, and only switch like-gender entities.", "We experimented with SE on the three custom models we mentioned in Section 2, use the same training configuration (see Appendix A for details).", "Quality of translation First, we test whether SE has an effect on translation accuracy.", "In terms of BLEU score, Table 7 shows SE has a negligible effect versus a vanilla baseline over both languages.", "Inspection of the translation outputs (see Table 9 in the Appendix) shows that the translations for the SE and vanilla models are overall very similar, exhibiting changes in case, entity translation and transliteration, as well as morphological inflection.", "Gender detection Table 7 show that SE has a substantial effect on gender inflection when both translating en de and en fr.", "SE shows marked improvements for females for both IWSLT (+14.4% accuracy) and WMT (+27.3% accuracy), Model BLEU Acc f Acc m IWSLT17 en de Vanilla 19.8 0 .", "at the expense of a small drop for males (-2.9% and -3.3%, respectively).", "Our method goes some way to addressing the significant bias towards males in these NMT systems (especially true of WMT), which reflect the large gender skew in their training corpora.", "For the two IWSLT tasks, the training corpora are small and the models show substantial gender bias in general, not only pertaining to name gender detection.", "Therefore, the SE method has a 
significant effect of mitigating biases for those models (increasing the accuracy of female name gender by between 7 and 25 times for en fr and en de, respectively), but despite these improvements the bias remains large.", "Although SE does not introduce the new female training samples, it does balance the frequency of female names, such that contexts of high-frequency female names are shared with low-frequency female names, thereby better training the NMT model to learn general gender cues.", "Sentiment bias We also tested SE on sentiment biases, the results show SE can help to mitigate sentiment biases on names, with t reducing from 0.40 to 0.21.", "This is because training with SE means PERSON names will have chance to appear in different contexts during training, instead of may only appearing in a specific context like vanilla training, which can help to reduce the model's stereotype of names.", "We did not attempt to use SE to mitigate race or nationality biases, although in principle this could be possible using the method.", "Gender bias is a central concern in machine translation research.", "Stanovsky et al. (2019) introduced the WinoMT challenge data set from the study of coreference gender bias (Zhao et al., 2018; 2582 Rudinger et al., 2018) to test the gender bias of machine translation systems.", "Researchers tried many different methods to mitigate gender bias.", "Saunders and Byrne (2020) and Costa-juss and de Jorge (2020) both used transfer learning to reduce gender bias by fine-tuning models with a small gender-balanced data set.", "Stafanovics et al. (2020) annotated source language sentence with grammatical gender information from target language to reduce the stereotype of gender for translation systems.", "Escud Font and Costa-juss (2019) used word embedding techniques, debiased the word embedding and then used these embeddings in training translationn models from scratch.", "All of this work was focused on sentences with gender pronouns, studying whether translation systems can correctly determine the grammatical gender of the words associated with gender pronouns.", "The gender bias we proposed in this paper is focused on names with implicit gender information.", "Other social biases and stereotypes have also been investigated.", "Kiritchenko and Mohammad (2018) evaluated gender and race biases on two hundred sentiment analysis systems, similar to our work, they also tested the influence of names on biases.", "Davidson et al. (2019) examined racial biases on a hate speech task, finding that tweets written in African-American English are more likely to be marked as offensive than tweets written in Standard American English.", "Rudinger et al. (2017) used pointwise mutual information to evaluate over the SNLI natural language inference data set, and uncover a wider range of biases, including gender, age, race, and nationality.", "Shwartz et al. (2020) is the closest to our work, they evaluated the sentiment bias in a language generation model on given names, finding evidence of bias whereby generated sentences related to specific given names being more negative than others.", "Our mitigation method SWITCHENTITY is based on data augmentation.", "Similar methods of entity switching have been proposed for named entity recognition (NER), either for data augmentation in training to increase model coverage over named entities (Agarwal et al., 2020); or during testing as a diagnostic tool to model generalization (Dai and Adel, 2020).", "Wang et al. 
(2018) proposed more general methods of random lexical substitutions for NMT, which designed to improve translation performance.", "Song et al. (2019) use data augmentation for name entity translation by replacing source words with their corresponding target translation.", "In this paper, we revealed two biases in the NMT systems, gender biases and sentiment biases against names.", "Our results show that the existing research models and commercial translation systems have serious biases, which not only affects translation quality, but also have ethical implications on fairness and bias.", "In order to mitigate biases, we proposed SWITCHENTITY , a simple training strategy which can reduce name biases without the need for any additional data.", "We discuss ethical considerations and limitations of our work.", "First, we focus solely on binary gender, as this can be directly observed in many languages with grammatical gender.", "Our use of binary gender is not intended to promulgate an inappropriate binary gender focus, but rather allows the study of gender bias in translation, based on the text contained in translation corpora.", "Admittedly our method has limitations, for instance, it will not be able to adequately handle trans-gendered and non-binary individuals; to do so would require substantial additional translation corpora, as well as extensions to the technique, which we leave for future research.", "Second, we evaluate only a small number of language pairs, but we expect similar behaviour for translation into many other gendered languages, the exploration of which we leave for future work.", "We thank the anonymous reviewers for their constructive comments.", "The authors acknowledge funding support by Meta, and would like to thank Francisco (Paco) Guzmn for fruitful discussions about this work." ]
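A small illustrative sketch of the template-based gender-agreement evaluation described in the passage above: each template's person slot is filled with names of known gender, the translation is checked for the correctly inflected key noun, and accuracy is reported per gender together with the gap between the two. `translate` and `has_correct_form` are assumed stand-ins for an NMT system and the key-noun checker, and the `[PERSON]` slot marker is only illustrative.

```python
from collections import defaultdict

def gender_agreement_accuracy(templates, names, translate, has_correct_form):
    """templates: list of (source_template, {"m": masc_form, "f": fem_form});
    names: list of (name, gender) pairs with gender in {"m", "f"}."""
    correct, total = defaultdict(int), defaultdict(int)
    for name, gender in names:
        for template, expected in templates:
            hypothesis = translate(template.replace("[PERSON]", name))
            correct[gender] += int(has_correct_form(hypothesis, expected[gender]))
            total[gender] += 1
    acc = {g: correct[g] / total[g] for g in total}
    acc["gap"] = abs(acc.get("m", 0.0) - acc.get("f", 0.0))
    return acc
```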
[ "abstain", "objective", "abstain", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "other", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "method", "other", "other", "abstain", "abstain", "other", "other", "other", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "method", "other", "other" ]
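The SWITCHENTITY augmentation described in the row above can be sketched as follows, under the assumption that aligned entity spans and per-type candidate lists have already been extracted from the corpus; person entities can be handled by using gender-specific type keys (e.g. PER_F / PER_M). This is an illustration of the idea, not the authors' implementation.

```python
import random

def switch_entities(src_tokens, tgt_tokens, entity_pairs, candidates):
    """entity_pairs: list of (src_span, tgt_span, entity_type), spans given as
    (start, end) token offsets; candidates: dict mapping entity type to a
    list of (src_string, tgt_string) replacement pairs of the same type."""
    src, tgt = list(src_tokens), list(tgt_tokens)
    # choose one replacement per aligned entity pair
    choices = [(src_span, tgt_span, random.choice(candidates[etype]))
               for src_span, tgt_span, etype in entity_pairs]
    # apply spans right-to-left on each side so earlier offsets stay valid
    for (start, end), _, (new_src, _) in sorted(choices, key=lambda c: -c[0][0]):
        src[start:end] = new_src.split()
    for _, (start, end), (_, new_tgt) in sorted(choices, key=lambda c: -c[1][0]):
        tgt[start:end] = new_tgt.split()
    return src, tgt
```

Applying this once per training pair in every epoch flattens the distribution over entity strings while preserving the natural distribution over entity types and leaving the surrounding context untouched.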
[ "Identifying causal relations of events is an important task in natural language processing area.", "However, the task is very challenging, because event causality is usually expressed in diverse forms that often lack explicit causal clues.", "Existing methods cannot handle well the problem, especially in the condition of lacking training data.", "Nonetheless, humans can make a correct judgement based on their background knowledge, including descriptive knowledge and relational knowledge .", "Inspired by it, we propose a novel L atent S tructure I nduction N etwork (LSIN) to incorporate the external structural knowledge into this task.", "Specifically, to make use of the descriptive knowledge, we devise a Descriptive Graph Induction module to obtain and encode the graph-structured descriptive knowledge.", "To leverage the relational knowledge, we propose a Relational Graph Induction module which is able to automatically learn a reasoning structure for event causality reasoning.", "Experimental results on two widely used datasets indicate that our approach significantly outperforms previous state-of-the-art methods.", "Event causality identification (ECI) aims to identify causal relation of events in texts.", "For example, in the sentence The earthquake generated a tsunami ., an ECI model should be able to identify a causal relationship that holds between the two mentioned events, i.e., earthquake cause tsunami .", "ECI is an important task in natural language processing (NLP) area and can support many NLP applications, such as machine reading comprehension (Berant et al., 2014), process extraction (Thalap-pillil Scaria et al., 2013) and future event prediction (Radinsky et al., 2012; Hashimoto et al., 2014).", "challenging, because event causality is usually expressed in diverse forms that often lack explicit clues indicating its existence.", "For example in Figure 1, the sentence has no explicit clue indicating the causal relation between global warming and tsunami .", "In this scenario, models can resort to a large amount of labeled data to learn diverse causal expressions.", "However, existing ECI datasets are very small.", "For example, the largest dataset EventStoryLine (Caselli and Vossen, 2017) only contains 258 documents, which is not suffi-cient to train neural network models (Liu et al., 2020).", "Consequently, models cannot thoroughly understand the text and possibly make a wrong prediction.", "Nonetheless, humans could make a correct judgement, because humans have the background knowledge about the two events.", "To be more spe-cific, humans not only know what the two events are, but also know the connection between them.", "Fortunately, existing knowledge bases (KBs) usually contain the Descriptive Knowledge of events and Relational Knowledge between events, which can be regarded as the background knowledge to enhance ECI models.", "In this paper, we focus on how to incorporate these two kinds of external knowledge into the task.", "Descriptive Knowledge : The external knowledge base contains the descriptive or explanatory information about events, which can be called the descriptive knowledge of events.", "It usually consists of one-hop neighbors of events.", "This kind of knowledge is able to help the model better understand what the mentioned event is.", "For example in Figure 1, the descriptive knowledge associated with global warming includes (global warming, IsA, temperature change) , (global warming, CreatedBy, greenhouse gas) and so on.", "If the model can make use 
of such knowledge, it is obvious that the model can better understand the meaning of the event itself than using only the given text.", "Therefore, incorporating the descriptive knowledge is very helpful for this task.", "However, when leveraging this kind of knowledge, we find two critical challenges: (1) As shown in Figure 1, the descriptive knowledge forms a sub-graph.", "How to effectively encode the graph-structured knowledge is a very challenging problem; (2) The knowledge base is incomplete (Wang et al., 2020), which will inevitably cause the descriptive knowledge of some events cannot be obtained from the KB.", "Thus, the model should have the ability to obtain and encode such knowledge, even if it does not exist in the KB.", "Relational Knowledge : The external knowledge base contains connections between events, which can be referred as the relational knowledge between events.", "It is usually defined by the multi-hop path between two events.", "This kind of knowledge can provide useful information for event causality reasoning, especially when the text lacks causal clues.", "For example in Figure 1, the relational knowledge between the two events is global warming Causes glacier melting CapableOf sea-level rising AtLocation ocean AtLocation tsunami .", "Apparently, compared with only using text information, utilizing the relational knowledge can provide ample evidence for the model to judge the causality between global warming and tsunami .", "However, two challenges exist when using the relational knowledge: (1) The multi-hop path may miss some potentially useful relations.", "For example in Figure 1, the fact (sea-level rising, Causes, tsunami) is described in the wikipedia page of sea-level rising 1 , while it is not annotated in the KB; (2) Not all the knowledge on the path is related to causality, such as (sea-level rising, AtLocation, ocean) .", "Therefore, directly reasoning along the multi-hop path struc-1 https://en.wikipedia.org/wiki/Sea_ level_rise ture may not be optimal.", "The model should be able to learn a more reasonable structure for capturing potentially useful information and reducing the impact of irrelevant knowledge.", "In this paper, we propose a novel method termed as L atent S tructure I nduction N etwork (LSIN) to overcome aforementioned challenges.", "Specifically, we devise a Descriptive Graph Induction module to make use of the descriptive knowledge.", "The module first adopts a hybrid method of retrieval and generation to obtain the descriptive knowledge, and then utilizes the information aggregation technique to encode the graph-structured knowledge.", "Meanwhile, we propose a Relational Graph Induction module to leverage the relational knowledge.", "The module first treats the reasoning structure as a latent variable and learns it in an end-to-end fashion.", "Then, the module performs event causality reasoning based on the induced structure.", "Experimental results on two widely used datasets demonstrate that our model substantially outperforms previous state-of-the-art methods.", "Our contributions are summarized as follows: We propose a novel Latent Structure Induction Network (LSIN) to leverage the external structural knowledge.", "To our knowledge, we are the first to use both the descriptive knowledge and relational knowledge for this task.", "To exploit the descriptive knowledge, we devise a descriptive graph induction module.", "To utilize the relational knowledge, we propose a relational graph induction module.", "Experimental results on 
two widely used datasets indicate that our proposed approach significantly outperforms previous state-of-the-art methods.", "Event causality identification (ECI) is a very important task in natural language processing area, which has attracted extensive attention in the past few years.", "Early studies for the task are feature-based methods which utilize lexical and syntactic features (Riaz and Girju, 2013; Gao et al., 2019), explicit causal patterns (Beamer and Girju, 2009; Do et al., 2011; Hu et al., 2017), and statistical causal associations (Riaz and Girju, 2014; Hashimoto et al., 2014; Hu and Walker, 2017; Hashimoto, 2019) for the task.", "With the development of deep learning, neural Global warming worsened, and tsunami strengthened.", "network-based methods have been proposed for the task and achieved the state-of-the-art performance (Kruengkrai et al., 2017; Kadowaki et al., 2019; Liu et al., 2020; Zuo et al., 2020).", "Liu et al. (2020) propose a mention masking generalization method and also consider the external structural knowledge.", "The very recent work (Zuo et al., 2020) propose a data augmentation method to alleviate the data lacking problem for the task.", "Regarding datasets construction, Mirza (2014) annotates the Causal-TimeBank dataset about event causal relations in the TempEval-3 corpus.", "Caselli and Vossen (2017) construct a dataset called EventStoryLine for event causality identification.", "Despite many efforts for this task, most existing methods typically train the models on manually labeled data solely, rarely considering the external structural knowledge.", "As a result, these methods cannot handle well the cases where there is no explicit causal clue.", "Although Liu et al. (2020) leverage the descriptive knowledge to enrich event representations, they directly retrieve the descriptive knowledge from the KB.", "Therefore, their method cannot handle the cases where there is no knowledge about the event in the KB.", "In addition, they ignore the relational knowledge between events.", "By contrast, our method can not only generate the descriptive knowledge when it cannot be retrieved from the KB, but also leverage the relational knowledge.", "To our knowledge, we are the first to simultaneously make use of the descriptive knowledge and relational knowledge for this task.", "Following previous works (Ning et al., 2018; Liu et al., 2020), we formulate ECI as a binary classification", "classification problem.", "For every pair of events in a sentence, we predict whether a causal relation holds.", "Figure 2 schematically visualizes our approach, which consists of three major components: (1) Context Encoding ( 3.1), which encodes the input sentence and outputs contextualized representations; (2) Descriptive Graph Induction ( 3.2), which first obtains the corresponding descriptive knowledge for each event, and then encodes the graph-structured knowledge; (3) Relational Graph Induction ( 3.3), which automatically induces a reasoning structure and performs causality reasoning on the induced structure.", "We will illustrate each component in detail.", "Given a sentence with a pair of events (denoted as e 1 and e 2 ), the context encoding module aims to extract context features, which takes the sentence as input and outputs the context representations.", "Our context encoder is based on the Transformer architecture (Vaswani et al., 2017).", "We adopt the BERT (Devlin et al., 2019) to encode the input sentence, 2 which has achieved the state-of-the-art performance for 
ECI task (Liu et al., 2020; Zuo et al., 2020).", "After using BERT encoder to compute the contextual representations of the entire sentence, we concatenate representations of [CLS], e 1 and e 2 as the context representation regarding to the event pair ( e 1 , e 2 ), namely F ( e 1 ,e 2 ) C = h [CLS] h e 1 h e 2 , (1) 2 Note that the encoder is not our focus in this paper.", "In fact, other models like convolutional neural networks and long short-term memory networks can also be as encoders.", "where indicates the concatenation operation.", "h [CLS] R d , h e 1 R d and h e 2 R d are representations of [CLS], e 1 and e 2 , respectively.", "d is the output hidden size of BERT model.", "Given e 1 and e 2 , we adopt a hybrid method of retrieval and generation to obtain their descriptive knowledge, respectively.", "The descriptive knowledge forms a sub-graph which is called Descriptive Graph (denoted as G d ).", "For this paper, we prefer CONCEPTNET (Speer et al., 2017) as the external KB, which contains abundant semantic knowledge of concepts.", "We take e 1 as an example to illustrate the knowledge obtaining procedure: (1) If the descriptive knowledge can be retrieved from the KB, we adopt the retrieval method.", "Our method first grounds e 1 to a concept via matching the event mention with the tokens of concepts in CONCEPTNET .", "We enhance the matching approach with some rules, such as soft matching with lemmatization and filtering of stop words.", "The grounded concept is called zero-hop concept.", "Then, our method grows zero-hop concept with one-hop concepts.", "The zero-hop concept, one-hop concepts and all relations between them form the descriptive graph for e 1 (denoted as G d 1 ).", "(2) If the descriptive knowledge cannot be retrieved from the KB, we adopt the generation method.", "Our method employs the pre-trained model, COMET (Bosselut et al., 2019), which is originally proposed for the knowledge base completion.", "Specifically, COMET is obtained by fine-tuning GPT (Radford et al., 2018) on CONCEPTNET .", "The input of COMET is the head event and candidate relation, and the output is the tail event.", "The relation types are the same as the ones used in Bosselut et al. (2019).", "By leveraging COMET, we can generate the descriptive graph G d 1 for e 1 .", "Graph neural networks have been widely used to encode graph-structured data (Lin et al., 2019; Yang et al., 2019), as they are able to effectively col-lect relevant evidence based on an information aggregation scheme.", "In addition, many works show that relational graph convolutional networks (R-GCNs) (Schlichtkrull et al., 2018) usually over-parameterize the model and cannot effectively utilize multi-hop relational information (Zhang et al., 2018; Lin et al., 2019).", "We thus apply GCNs (Kipf and Welling, 2017) to encode the related descriptive knowledge of e 1 and e 2 .", "Formally, given a descriptive graph G d (i.e., G d 1 or G d 2 ) with n d nodes (i.e., concepts), which can be represented with an n d n d adjacency matrix A d .", "If there is a connection between node i and node j , the A dij is set to 1. 
For the node i at the l -th layer, the convolution computation can be defined as follows: u ( l ) i = ( n d (cid:88) j =1 A dij W ( l ) u u ( l 1) j + b ( l ) u ) , (2) where W ( l ) u and b ( l ) u are the weight matrix and bias vector for the l -th layer, respectively.", "is an activation function (e.g., ReLU).", "u (0) i R d is the initial representation of the i -th node obtained by the pre-trained model (i.e., BERT).", "To consider context information when encoding descriptive knowledge, we use the h e 1 and h e 2 obtained in Section 3.1 as the initial representations of events.", "After the knowledge encoding, the representations of e 1 and e 2 in descriptive graphs are denoted as u e 1 and u e 2 , respectively.", "We concatenate them as the descriptive knowledge representation: F ( e 1 ,e 2 ) D = u e 1 u e 2 .", "Given e 1 and e 2 , our model first retrieves the multihop path between the two events from CONCEPTNET .", "We refer to the multi-hop path as Relational Path .", "Since shorter connections between two concepts could mean stronger relevance (Lin et al., 2019), our model exploits the shortest path between the two events as the relational path.", "We represent the CONCEPTNET as a graph, and then use Net-workX toolkit 3 to get the shortest path between the two events.", "When there are multiple shortest paths, we randomly select one path for avoiding information redundancy.", "To capture potentially useful information and reduce the impact of irrelevant knowledge on the relational path, our model treats the reasoning structure as a latent variable and induces it with the", "input of the relational path, which can be shown in Figure 2. We call the induced reasoning structure as Relational Graph (denoted as G r ).", "The structure induction module is built based on the structured attention (Kim et al., 2017).", "We use a variant of Kirchhoff's Matrix-Tree Theorem (Koo et al., 2007; Nan et al., 2020) to learn the graph structure.", "Formally, the nodes of relational graph are the concepts on the relational path.", "The initialized representation of each node is obtained via the pre-trained model (i.e., BERT).", "The representation of the i -th node is denoted as m i R d .", "We first calculate the pair-wise unnormalized attention score s ij between the i -th node and the j -th node: s ij = (tanh( W p m i )) TW b (tanh( W c m j )) , (4) where W p and W c are weights matrixes.", "W b are the weights for the bilinear transformation.", "Next, we compute the root score s ri which represents the unnormalized probability of the i -th node to be selected as the root node of the structure: s ri = W r m i , (5) where W r R 1 d is the weight for linear transformation.", "Suppose the graph G r has n r nodes, we first assign non-negative weights P R n r n r to the edges of the induced relational graph: P ij = (cid:40) 0 , if i = j exp( s ij ) , otherwise , (6) where P ij is the weight of the edge between the i -th and the j -th node.", "Then, following Koo et al. 
(2007), we define the Laplacian matrix L R n r n r of G r , and its variant L R n r n r , respectively: L ij = (cid:40) (cid:80) n r k =1 P kj , if i = j P ij , otherwise , (7) L ij = (cid:40) exp( s ri ) , if i = 1 L ij , otherwise .", "We use A rij to denote the marginal probability of the edge between the i -th node and the j -th node, which can be computed as follows:", "where is the Kronecker delta (Koo et al., 2007) and 1 denotes matrix inversion.", "A r can be regarded as a weighted adjacency matrix of the graph G r .", "Finally, A r is fed into the iterative refinement for event causality reasoning.", "After obtaining the relational graph structure, we perform event causality reasoning on the induced structure.", "To better capture potential reasoning clues, we adopt the densely connected graph convolutional networks (DCGCNs) (Guo et al., 2019), which allows training a deeper reasoning model.", "The convolution computation of each layer is: v ( l ) i = ( n r (cid:88) j =1 A rij W ( l ) v g ( l ) j + b ( l ) v ) , (10) where g ( l ) j is the concatenation of the initial node representation and the node representations produced in layers 1 , . . . , l 1 , namely g ( l ) j = m j v (1) j v ( l 1) j .", "The induced structure at once is relatively shallow (Liu et al., 2019; Nan et al., 2020) and may not be optimal for causality reasoning.", "Therefore, we iteratively refine the induced structure to learn a more informative structure.", "We stack N blocks (each block is structure induction and DCGCNs reasoning) of this module to induce the structure N times.", "Intuitively, as the structure gets more refined, the structure is more reasonable.", "After the iterative refinement, the representations of e 1 and e 2 are denoted as v e 1 and v e 2 , respectively.", "We concatenate them as the relational knowledge representation: F ( e 1 ,e 2 ) R = v e 1 v e 2 .", "We concatenate the context representation, descriptive knowledge representation and relational knowledge representation as the final representation:", "To make the final prediction, we perform a binary classification by taking F e 1 ,e 2 as input: p e 1 ,e 2 = softmax( W s F e 1 ,e 2 + b s ) .", "For training, we adopt cross entropy as the loss function: J () = (cid:88) s D (cid:88) e i ,e j E s e i (cid:54) = e j y e i ,e j log( p e i ,e j ) , (14) where denotes the model parameters.", "s denotes a sentence in the training set D .", "E s is the set of events in sentence s .", "y e i ,e j is a one-hot vector representing the gold label between e i and e j .", "We evaluate our proposed method on two widely used datasets, including EventStoryLine (Caselli and Vossen, 2017) and Causal-TimeBank (Mirza et al., 2014).", "For EventStoryLine, the dataset contains 258 documents, 5,334 events in total, and 1,770 of 7,805 event pairs are causally related.", "For Causal-TimeBank, the dataset contains 184 documents, 6,813 events, and 318 of 7,608 event pairs are causally related.", "We conduct the 5-fold and 10-fold cross-validation on the EventStoryLine dataset and Causal-TimeBank dataset respectively, same as previous methods to ensure fairness.", "Following previous works (Choubey and Huang, 2017; Gao et al., 2019), we adopt Precision (P), Recall (R) and F1-score (F1) as evaluation metrics.", "In our implementations, our method uses the Hug-gingFace's Transformers library 4 to implement the uncased BERT base model, which has 12-layers, 768-hidden, and 12-heads.", "The learning rate is initialized as 2e-5 with a linear decay.", "We use the Adam 
algorithm (Kingma and Ba, 2015) to optimize model parameters.", "The batch size is set to 20.", "The number of induction blocks (i.e., N ) is set to 2. The dropout of GCN is set to 0.3.", "Due to the sparseness of positive examples, we adopt a negative sampling strategy for training.", "The negative sampling rate is 0.6 and 0.7 for the EventStoryLine and Causal-TimeBank, respectively.", "We utilize CONCEPTNET 5.0 as the external knowledge base.", "Feature-based methods: (1) Mirza and Tonelli (2014), which proposes a data driven method with causal signals for the task; (2) Mirza (2014), which employs a verb rule based model with data filtering and causal signals enhancement; (3) Choubey and Huang (2017), which proposes a sequence model exploring complex handcrafted features for", "the task; (4) Gao et al. (2019), which utilizes a logistic regression classifier with the integer linear programming to model causal structure for the task.", "Neural network-based methods: (1) Cheng and Miyao (2017), which proposes a dependency path based bidirectional long short-term memory network (BiLSTM) that models the context between two event mentions for causal relation identification; (2) KMMG (Liu et al., 2020), which proposes a mention masking generalization method and also utilizes the external knowledge; (3) KnowDis (Zuo et al., 2020), which proposes a knowledge enhanced distant data augmentation method to alleviate data lacking problem.", "Since some baselines are evaluated either on the EventStoryLine dataset or the Causal-TimeBank dataset, the baselines used for the two datasets are different.", "Table 1 and Table 2 show the results on the EventStoryLine and Causal-TimeBank, respectively.", "From the tables, we can observe that: (1) Our method outperforms all the baselines by a large margin on the two datasets.", "For example, compared with the state-of-the-art model KnowDis (Zuo et al., 2020), our method LSIN Methods P(%) R(%) F1(%) BERT 36.9 56.0 44.5 BERT+DK 41.8 51.9 46.3 BERT+RK 46.1 55.4 50.3 BERT+DK+RK 47.9 58.1 52.5 Table 3: Experimental results by using different kinds of knowledge on the EventStoryLine dataset.", "achieves 2.8% and 3.9% improvements of F1-score on the EventStoryLine and Causal-TimeBank, respectively.", "It indicates that our proposed method is very effective for this task.", "(2) Compared with the state-of-the-art model KMMG (Liu et al., 2020), our method achieves 6.0% improvements in terms of Precision score on the EventStoryLine.", "The reason may be that our method utilizes the relational knowledge between events for causality reasoning, which can improve the confidence of event causality prediction.", "(3) Our method improves upon the BERT model by 8.0% and 12.4% in terms of F1-score on the two datasets, respectively.", "This suggests that only using the annotated training data is not enough to tackle the task.", "Moreover, it also indicates that our method is able to effectively leverage the external structural knowledge for ECI task.", "(4) The BERT model achieves comparable performance with complex feature-based methods such as Gao et al. (2019) on the EventStoryLine dataset, which indicates that the BERT is able to extract useful text features for the task.", "We validate the effectiveness of external structural knowledge for this task.", "Based on the BERT model, we leverage the descriptive knowledge via descriptive graph induction module, and the relational knowledge via relational graph induction module.", "The results are shown in Table 3. 
We have two important observations: (1) Based on the BERT model, incorporating these two kinds of knowledge can both improve performance.", "Moreover, simultaneously using these two kinds of knowledge can further improve the performance.", "It indicates that the external structural knowledge is very effective for this task.", "(2) The performance improvement of using the Methods P(%) R(%) F1(%) Liu et al. (2020) 44.5 39.3 41.8 DGI-Retrieval 40.0 46.1 42.8 DGI-Generation 39.3 51.3 44.5 DGI-Hybrid 41.8 51.9 46.3 Table 4: Comparison between the different methods for using the descriptive knowledge on the EventStoryLine dataset.", "relational knowledge is more obvious than that of using the descriptive knowledge, achieving 4.0% improvements in terms of F1-score.", "We guess that the relational knowledge can provide more clues for event causality reasoning.", "To verify the effectiveness of descriptive graph induction module, we compare our method with the state-of-the-art model (Liu et al., 2020).", "Liu et al. (2020) first retrieve the descriptive knowledge, and then transfer the knowledge into a sequence.", "Finally, they adopt the BERT to encode the knowledge.", "The results are listed in Table 4. In the table, DGI-Retrieval, DGI-Generation and DGI-Hybrid denote obtaining the descriptive knowledge via retrieval, generation and hybrid method, respectively.", "Overall, we can observe that: (1) The DGI-Hybrid model significantly outperforms Liu et al. (2020), achieving 4.5% improvements of F1-score.", "Moreover, even if we use the same retrieval method as Liu et al. (2020), our model still achieves better result.", "It indicates the descriptive graph induction module can better take advantage of the descriptive knowledge.", "(2) Compared with Liu et al. 
(2020), the DGI-Hybrid model achieves great improvements in terms of Recall score (i.e., improving 12.6%).", "The reason is that our method can automatically generate the descriptive knowledge, when the knowledge cannot be retrieved from the KB.", "To validate the effectiveness of the relational graph induction module, we compare our method with other three baselines.", "The three baselines are illustrated as follows: (1) LSTM-based Reasoning , which regards the relational path as a sequence and employs LSTM Methods P(%) R(%) F1(%) LSTM-based 43.0 54.5 48.1 Fixed Graph-based 43.1 56.5 48.9 Attention-based 46.3 55.0 50.3 LSIN (Ours) 47.9 58.1 52.5 Table 5: Comparison between the different methods for leveraging the relational knowledge on the EventStoryLine dataset.", "Its nodes are concepts on the path and edges only exist between adjacent concepts; (3) Attention-based Reasoning , which uses the self-attention to encode the relational path for modeling the dependencies between arbitrary two concepts.", "(1) Our method LSIN outperforms the three methods by a large margin.", "For example, compared with LSTM-based reasoning method, our method achieves 4.4% improvements of F1-score.", "This empirically confirms using induced relational graph structure is more effective than directly using the relational path for causality reasoning.", "(2) Compared with Fixed Graph-based reasoning method, our method achieves 3.6% improvements of F1-score.", "It indicates that our method is able to effectively capture the potentially useful information and reduce the impact of irrelevant knowledge on the relational path.", "We investigate the effect of the refinement on the overall performance.", "We plot the overall F1-score varying with the number of refinements in Figure 3. 
From the figure, we can observe that: (1) Our method LSIN yields the best performance in the second refinement.", "Compared with the first induction, the second refinement achieves 1.1% improvements of F1-score on the EventStoryLine dataset.", "This indicates that the proposed LSIN is able to induce more reasonable reasoning structures by iterative refinement.", "(2) When the number of refinements is too large, the performance on the two datasets stops increasing or even decreases due to over-fitting.", "We conduct case study to further verify the effectiveness of our method.", "Table 6 shows several cases showing the outputs of BERT and our method LSIN.", "From the results, we can observe that the BERT model cannot handle the cases where there is no causal clue.", "By contrast, our method can make correct predictions by leveraging the external structural knowledge.", "For the second example in Table 6, although the text has no clue indicating the existence of causality between fights and arrested , there is the relational knowledge between the two events in the KB, namely fight HasSubevent hurt someone else HasSubevent get arrested .", "Our method can make use of the relational knowledge to make a correct prediction.", "The two examples qualitatively demonstrate our method can effectively leverage the external knowledge for ECI task.", "In this paper, we propose a novel latent structure induction network (LSIN) to leverage the external structural knowledge for ECI task.", "To make use of the descriptive knowledge, we devise a descriptive graph induction module to obtain and encode the graph-structured descriptive knowledge.", "To utilize the relational knowledge, we propose a relational graph induction module to induce a more reasonable reasoning structure for causality reasoning.", "Experimental results on two widely used datasets indicate that our approach substantially outperforms previous state-of-the-art methods.", "We thank anonymous reviewers for their insightful comments and suggestions.", "This work is supported by the National Key Research and Development Program of China (No. 2020AAA0106400), and the National Natural Science Foundation of China (No. 61806201).", "This work is also supported by Beijing Academy of Artificial Intelligence (BAAI2019QN0301) and the fund of the joint project with Beijing Baidu Netcom Science Technology Co., Ltd." ]
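The record above (stored as a flat array of sentences) describes a graph convolution over the descriptive graph: each node aggregates its neighbours' previous-layer states through the adjacency matrix, followed by a linear map and an activation. The sketch below only illustrates that step under assumed shapes and a NumPy formulation; it is not the authors' implementation.

```python
import numpy as np

def gcn_layer(A, U, W, b):
    """One graph-convolution step over a descriptive graph.
    A: (n, n) adjacency, U: (n, d_in) previous-layer node states,
    W: (d_in, d_out) layer weights, b: (d_out,) bias.
    Mirrors u_i^(l) = rho( sum_j A_ij W u_j^(l-1) + b ) with rho = ReLU."""
    return np.maximum(0.0, A @ U @ W + b)

# Toy usage: a 3-node graph with self-loops and 4-dimensional node states.
rng = np.random.default_rng(0)
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
U = rng.normal(size=(3, 4))
W = 0.1 * rng.normal(size=(4, 4))   # hypothetical layer size
b = np.zeros(4)
print(gcn_layer(A, U, W, b).shape)  # -> (3, 4)
```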
[ "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "objective", "abstain", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "result", "other", "other", "other" ]
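The same record also outlines the latent structure induction used for the relational graph: bilinear pair-wise scores, root scores, and edge marginals computed with a variant of Kirchhoff's Matrix-Tree Theorem (Koo et al., 2007). The sketch below follows that published construction; the weight shapes, the first-row root substitution, and the exact marginal expression are assumptions based on the cited formulation, not the authors' released code.

```python
import numpy as np

def induce_relational_graph(M, Wp, Wc, Wb, Wr):
    """Latent structure induction over n relational-path nodes.
    M: (n, d) node representations. Returns an (n, n) matrix of marginal
    edge probabilities, used as a weighted adjacency for reasoning."""
    p = np.tanh(M @ Wp.T)                  # "parent" projections of each node
    c = np.tanh(M @ Wc.T)                  # "child" projections of each node
    S = p @ Wb @ c.T                       # pair-wise unnormalised scores s_ij
    s_root = (M @ Wr.T).ravel()            # root scores s_i^r

    P = np.exp(S)
    np.fill_diagonal(P, 0.0)               # P_ij = 0 when i == j

    L = np.diag(P.sum(axis=0)) - P         # Laplacian of the edge-weight graph
    L_bar = L.copy()
    L_bar[0, :] = np.exp(s_root)           # first row replaced by root scores
    inv = np.linalg.inv(L_bar)

    n = M.shape[0]
    A = np.zeros((n, n))                   # marginal probability of edge i -> j
    for i in range(n):
        for j in range(n):
            keep_j = 0.0 if j == 0 else P[i, j] * inv[j, j]
            keep_i = 0.0 if i == 0 else P[i, j] * inv[j, i]
            A[i, j] = keep_j - keep_i
    return A

# Toy usage on 4 nodes with 8-dimensional states (all sizes hypothetical).
rng = np.random.default_rng(0)
n, d = 4, 8
M = rng.normal(size=(n, d))
Wp, Wc, Wb = (0.1 * rng.normal(size=(d, d)) for _ in range(3))
Wr = 0.1 * rng.normal(size=(1, d))
print(induce_relational_graph(M, Wp, Wc, Wb, Wr).shape)  # -> (4, 4)
```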
[ "Statutory article retrieval is the task of automatically retrieving law articles relevant to a legal question.", "While recent advances in natural language processing have sparked considerable interest in many legal tasks, statutory article retrieval remains primarily untouched due to the scarcity of large-scale and high-quality annotated datasets.", "To address this bottleneck, we introduce the Belgian Statutory Article Retrieval Dataset (BSARD), which consists of 1,100+ French native legal questions labeled by experienced jurists with relevant articles from a corpus of 22,600+ Belgian law articles.", "Using BSARD, we benchmark several state-of-the-art retrieval approaches, including lexical and dense architectures, both in zero-shot and supervised setups.", "We find that fine-tuned dense retrieval models significantly outperform other systems.", "Our best performing baseline achieves 74.8% R @ 100, which is promising for the feasibility of the task and indicates there is still room for improvement.", "By the specificity of the domain and addressed task, BSARD presents a unique challenge problem for future research on legal information retrieval.", "Our dataset and source code are publicly available.", "Legal issues are an integral part of many people's lives (Ponce et al., 2019).", "However, the majority of citizens have little to no knowledge about their rights and fundamental legal processes (Balmer et al., 2010).", "As the Internet has become the primary source of information in response to life problems (Estabrook et al., 2007), people increasingly turn to search engines when faced with a legal issue (Denvir, 2016).", "Nevertheless, the quality of the search engine's legal help results is currently unsatisfactory, as top results mainly refer people to commercial websites that provide basic information as a way to advertise for-profit services (Hagan and Li, 2020).", "On average, only one in five persons obtain help from the Internet to clarify or solve their legal issue (Ponce et al., 2019).", "As a result, many vulnerable citizens who cannot afford a legal expert's costly assistance are left unprotected or even exploited.", "This barrier to accessing legal information creates a clear imbalance within the legal system, preventing the right to equal access to justice for all.", "People do not need legal services in and of themselves; they need the ends that legal services can provide.", "Recent advances in natural language processing (NLP), combined with the increasing amount of digitized textual data in the legal domain, offer new possibilities to bridge the gap between people and the law.", "For example, legal judgment prediction (Aletras et al., 2016; Luo et al., 2017; Zhong et al., 2018; Hu et al., 2018; Chen et al., 2019) may assist citizens in finding insightful patterns between their case and its outcome.", "Additionally, legal text summarization (Hachey and Grover, 2006; Bhattacharya et al., 2019) and automated contract review (Harkous et al., 2018; Lippi et al., 2019) may help people clarify long, complex, and ambiguous legal documents.", "In this work, we focus on statutory article retrieval, which, given a legal question such as Is it legal to contract a lifetime lease? 
, aims to return one or several relevant law articles from a body of legal statutes (Kim et al., 2019; Nguyen et al., 2020), as illustrated in Figure 1.", "A qualified statutory article retrieval system could provide a professional assisting service for unskilled humans and help empower the weaker parties when used for the public interest.", "Finding relevant statutes to a legal question is a challenging task.", "Unlike traditional ad-hoc information retrieval (Craswell et al., 2020), statutory article retrieval deals with two types of language: common natural language for the questions and complex legal language for the statutes.", "This difference in language distribution greatly complicates 6789 Shouldphysicians,surgeons,healthofficers,pharmacists,midwives,andallotherswho,throughtheirstatusorprofession,beinpossessionofinformationconfidedtothemrevealsuchsecrets, they shall be punished with imprisonmentofonetothreeyearsandafineof100to1000eurosoroneofthesepenaltiesonly unless called to testify as a witness in a court of law (or before a parliamentary commission of inquiry) or compelled by a decreeorordertodivulgethesecret. Article458,PenalCode Relevant article What do I risk if I violate professional confidentiality?", "the retrieval task as it indirectly requires an inherent interpretation system that can translate a natural question from a non-expert to a legal question to be matched against statutes.", "For skilled legal experts, these interpretations come from their knowledge of a question's domain and their understanding of the legal concepts and processes involved.", "Nevertheless, an interpretation is rarely unique.", "Instead, it is the interpreter's subjective belief that gives meaning to the question and, accordingly, an idea of the domains in which the answer can be found.", "As a result, the same question can yield different paths to the desired outcome depending on its interpretation, making statutory article retrieval a difficult and time-consuming task.", "Besides, statutory law is not a stack of independent articles to be treated as complete sources of information on their own unlike news or recipes.", "Instead, it is a structured and hierarchical collection of legal provisions that have whole meaning only when considered in their overall context, i.e., together with the supplementary information from their neighboring articles, the fields and sub-fields they belong to, and their place in the hierarchy of the law.", "For instance, the answer to the question Can I terminate an employment contract? 
will most often be found in labor law.", "However, this is not necessarily true if an employer is contracting a self-employed worker to carry out a specific task, in which case the answer probably lies at the higher level of contract law.", "This example illustrates the importance of considering the question's context and understanding the hierarchical structure of the law when looking for relevant statutory articles.", "In order to study whether retrieval models can approximate the efficiency and reliability of legal experts, we need a suitable labeled dataset.", "However, such datasets are difficult to obtain considering that, although statutory provisions are generally publicly accessible (yet often not in a machine-readable for-mat), the questions posed by citizens are not.", "This work presents a novel French native expert-annotated statutory article retrieval dataset as its main contribution.", "Our Belgian Statutory Article Retrieval Dataset (BSARD) consists of more than 1,100 legal questions posed by Belgian citizens and labeled by legal experts with references to relevant articles from a corpus of around 22,600 Belgian law articles.", "As a second contribution, we establish strong baselines on BSARD by comparing diverse state-of-the-art retrieval approaches from lexical and dense architectures.", "Our results show that fine-tuned dense retrieval models significantly outperform other approaches yet suggest ample opportunity for improvement.", "We publicly release our dataset and source code at https: //github.com/maastrichtlawtech/bsard .", "Due to the increasing digitization of textual legal data, the NLP community has recently introduced more and more datasets to help researchers build reliable models on several legal tasks.", "For instance, Fawei et al. (2016) introduced a legal question answering (LQA) dataset with 400 multi-choices questions based on the US national bar exam.", "Similarly, Zhong et al. (2020) released an LQA dataset based on the Chinese bar exam consisting of 26,365 6790 multiple-choice questions, together with a database of evidence that includes 3,382 Chinese legal provisions and the content of the national examination counseling book.", "Furthermore, Duan et al. (2019) proposed a legal reading comprehension dataset with 52,000 question-answer pairs crafted on the fact descriptions of 10,000 cases from the Supreme People's Court of China.", "On a different note, Xiao et al. (2018) presented a dataset for legal judgment prediction (LJP) with around 2.68 million Chinese criminal cases annotated with 183 law articles and 202 charges.", "Likewise, Chalkidis et al. (2019a) introduced an LJP dataset consisting of 11,478 English cases from the European Court of Human Rights labeled with the associated final decision.", "Meanwhile, Xiao et al. (2019) introduced a dataset for similar case matching with 8,964 triplets of cases published by the Supreme People's Court of China, and Chalkidis et al. (2019b) released a text classification dataset containing 57,000 English EU legislative documents tagged with 4,271 labels from the European Vocabulary.", "Additionally, Manor and Li (2019) introduced a legal text summarization dataset consisting of 446 sets of contract sections and corresponding reference summaries, and Holzenberger et al. (2020) presented a statutory reasoning dataset based on US tax law.", "Recently, Hendrycks et al. 
(2021) proposed a dataset for legal contract review that includes 510 contracts annotated with 41 different clauses for a total of 13,101 annotations.", "In the same vein, Borchmann et al. (2020) introduced a semantic retrieval dataset for contract discovery with more than 2,500 annotations in around 600 documents.", "Lastly, the COLIEE Case Law Corpus (Rabelo et al., 2020) is a case law retrieval and entailment dataset that includes 650 base cases from the Federal Court of Canada, each with 200 candidate cases to be identified as relevant to the base case.", "Regarding statutory article retrieval, the only other publicly available dataset is the COLIEE Statute Law Corpus (Rabelo et al., 2020).", "It comprises 696 questions from the Japanese legal bar exam labeled with references to relevant articles from the Japanese Civil Code, where both the questions and articles have been translated from Japanese to English.", "However, this dataset focuses on legal bar exam question answering, which is quite different from legal questions posed by ordinary citizens.", "While the latter tend to be vague and straightforward, bar exam questions are meant for aspiring lawyers and are thus specific and advanced.", "Besides, the dataset only contains closed questions (i.e., questions with yes or no answers) and considers almost 30 times fewer law articles than BSARD does.", "Also, unlike BSARD, the data are not native sentences but instead translated from a foreign language with a completely different legal system.", "1 As a result, the translated dataset may not accurately reflect the logic of the original legal system and language.", "These limitations suggest the need for a novel large-scale citizen-centric native dataset for statutory article retrieval, which is the core contribution of the present work.", "We create our dataset in four stages:", "(i) compiling a large corpus of Belgian law articles,", "(ii) gathering legal questions with references to relevant law articles,", "(iii) refining these questions, and", "(iv) matching the references to the corresponding articles from our corpus.", "Law articles collection.", "In civil law jurisdictions, a legal code is a type of legislation that purports to exhaustively cover a whole area of law, such as criminal law or tax law, by gathering and restating all the written laws in that area into a unique book.", "Hence, these books constitute valuable resources to collect many law articles on various subjects.", "We consider 32 publicly available Belgian codes, as presented in Table 3 of Appendix A. 
Together with the legal articles, we extract the corresponding headings of the sections in which these articles appear (i.e., book, part, act, chapter, section, and subsection names).", "These headings provide an overview of each article's subject.", "As preprocessing, we use regular expressions to clean up the articles of specific wording indicating a change in part of the article by a past law (e.g., nested brackets, superscripts, or footnotes).", "Additionally, we identify and remove the articles repealed by past laws but still present in the codes.", "Eventually, we end up with a corpus C = { a 1 , , a N } 1 Japan is a civil law country that relies predominantly on the rules written down in statutes, whereas most Englishspeaking countries (e.g., US, UK, Canada, and Australia) have a common law system that relies predominantly on past judicial decisions, known as precedents.", "Questions collection.", "We partner with Droits Quotidiens (DQ), 2 a Belgian organization whose mission is to clarify the law for laypeople.", "Each year, DQ receives and collects around 4,000 emails from Belgian citizens asking for advice on a personal legal issue.", "Thanks to these emails, its team of six experienced jurists keeps abreast of Bel-gium's most common legal issues and addresses them as comprehensively as possible on its website.", "Each jurist is an expert in a specific field (e.g., family, housing, or work) and is responsible for answering all questions related to that field.", "Given their qualifications and years of experience in providing legal advice in their respective fields, the experts can be considered competent enough to always (eventually) retrieve the correct articles to a given question.", "In practice, their legal clarification process consists of four steps.", "First, they identify the most frequently asked questions on a common legal issue.", "Then, they define a new anonymized model question on that issue expressed in natural language terms, i.e., as close as possible as if a layperson had asked it.", "Next, they search the Belgian law for articles that help answer the model question and reference them.", "Finally, they answer the question using the retrieved relevant articles in a way a layperson can understand.", "These model questions, legal references, and answers are further categorized before being posted on DQ's website (e.g., the question What is the seizure of goods? is tagged under the Money Debt recovery category).", "With their consent, we collect more than 3,200 model questions together with their references to relevant law articles and categorization tags.", "Assuming it takes a jurist between 5 to 20 minutes to find the relevant articles to a given question and categorize the latter.", "An estimate of the pecuniary value of those labeled questions is over C105,000 3,200 questions, each requiring 10 minutes to label, assuming a rate of C200 per hour.", "Questions refinement.", "We find that around one-third of the collected questions are duplicates.", "However, these duplicated questions come with different categorization tags, some of which providing additional context that can be used to refine the questions.", "For example, the question Should I 2 https://droitsquotidiens.be/ install fire detectors? 
appears four times in total, under the following tags: Housing Rent I am a { tenant , landlord } In { Wallonia , Brussels } .", "We distinguish between the tags with one or a few words indicating a question subject (e.g., housing and rent ) and those that provide context about a personal situation or location as short descriptive sentences (e.g., I am tenant in Brussels. ).", "If any, we append the contextual sentence tags in front of the questions, which solves most of the duplicates problem and improves the overall quality of the questions by making them more specific.", "Questions filtering.", "The questions collected are annotated with plain text references to relevant law articles (e.g., Article 8 of the Civil Code ).", "We use regular expressions to parse these references and match them to the corresponding articles from our corpus.", "First, we filter out questions whose references are not articles (e.g., an entire decree or order).", "Then, we remove questions with references to legal acts other than codes of law (e.g., decrees, directives, or ordinances).", "Next, we ignore questions with references to codes other than those we initially considered.", "We eventually end up with 1,108 questions, each carefully labeled with the ids of the corresponding relevant law articles from our corpus.", "Finally, we split the dataset into training/test sets with 886 and 222 questions, respectively.", "To provide more insight, we describe quantitative and qualitative observations about BSARD.", "Specifically, we explore", "(i) the diversity in questions and articles,", "(ii) the relationship between questions and their relevant articles, and", "(iii) the type of reasoning required to retrieve relevant articles.", "Diversity.", "The 22,633 law articles that constitute our corpus have been collected from 32 Belgian codes covering a large number of legal topics, as presented in Table 3 of Appendix A. 
The articles have a median length of 77 words, but 142 articles exceed 1,000 words (the lengthiest one being up to 5,790 words), as illustrated in Figure 2b.", "These long articles are mostly general provisions , i.e., articles that appear at the beginning of a code and define many terms and concepts later mentioned in the code.", "The questions are between 5 and 44 words long, with a median of 14 words, as shown in Figure 2a.", "They cover a wide range of topics, with around 85% of them being either about family, 6792 General topic Percentage Subtopics Example Family 30.6% Marriage, parentage, divorce, etc.", "housing, money, or justice, while the remaining 15% concern either social security, foreigners, or work, as described in Table 1.", "Question-article relationship.", "Questions might have one or several relevant legal articles.", "Overall, 75% of the questions have less than five relevant articles, 18% have between 5 and 20, and the remaining 7% have more than 20 with a maximum of 109, as seen in Figure 2c.", "The latter often have complex and indirect answers that demand extensive reasoning over a whole code section, which explains these large numbers of relevant articles.", "Furthermore, an article deemed relevant to one question might also be for others.", "Therefore, we calculate for each unique article deemed relevant to at least one question the total number of times it is cited as a legal reference across all questions.", "As a result, we find that the median number of citations for those articles is 2, and less than 25% of them are cited more than five times, as illustrated in Figure 2d.", "Hence, out of the 22,633 articles, only 1,612 are referred to as relevant to at least one question in the dataset, and around 80% of these 1,612 articles come from either the Civil Code, Judicial Code, Criminal Investigation Code, or Penal Code.", "Meanwhile, 18 out of the 32 codes have less than five articles mentioned as relevant to at least one question, which can be explained by the fact that those codes focus less on individuals and their concerns.", "Formally speaking, a statutory article retrieval system R : ( q, C ) F is a function that takes as input a question q along with a corpus of law articles C , and returns a much smaller filter set F C of the supposedly relevant articles, ranked by decreasing order of relevance.", "For a fixed k = |F| |C| , the retriever can be evaluated in isolation with multiple rank-based metrics (see Section 5.1).", "The following section describes the retrieval models we use as a benchmark for the task.", "Traditionally, lexical approaches have been the de facto standard for textual information retrieval due to their robustness and efficiency.", "Given a query q and an article a , a lexical model assigns to the pair ( q, a ) a score s L : ( q, a ) R + by computing the sum, over the query terms, of the weights of each query term t q in the article, i.e., s L ( q, a ) = (cid:88) t q w ( t, a ) .", "where the term frequency tf is the number of occurrences of term t in article a , and the document frequency df is the number of articles within the corpus that contain term t .", "Then, we experiment with the BM25 weighting formula (Robertson et al., 1994), defined as w ( t, a ) = tf( t, a ) ( k 1 +1) tf( t, a )+ k 1 (cid:16) 1 b + b | a | avgal (cid:17) log |C| df( t ) + 0 .", "5 df( t ) + 0 .", "5 , (3) where k 1 R + and b [0 , 1] are constant parameters to be fixed, | a | is the article length, and avgal is the average article length in the collection.", 
"During inference, we compute a score for each article in corpus C and return the k articles with the highest scores as the topk most relevant results to the input query.", "Lexical approaches suffer from the lexical gap problem (Berger et al., 2000) and can only retrieve articles containing keywords present in the query.", "To overcome this limitation, recent work (Lee et al., 2019; Karpukhin et al., 2020; Xiong et al., 2021) relies on neural-based architectures to capture semantic relationships between the query and documents.", "The most commonly used approach is based on a bi-encoder model (Gillick et al., 2018) that maps queries and documents into dense vector representations.", "Formally, a dense retriever calculates a relevance score s D : ( q, a ) R + between question q and article a by the similarity of their respective embeddings h q , h a R d , i.e., s D ( q, a ) = sim ( h q , h a ) , (4) where sim : R d R d R is a similarity function such as dot product or cosine similarity.", "Typically, these embeddings result from a pooling operation on the output representations of a word embedding model: h q = pool ( f ( q ; 1 )) , and h a = pool ( f ( a ; 2 )) , (5) where model f ( ; i ) : W n R n d with parameters i maps an input text sequence of n terms from vocabulary W to d -dimensional real-valued word vectors.", "The pooling operation pool : R n d R d uses the output word embeddings to distill a global representation for the text passage using either mean, max, or [CLS] pooling.", "Note that the bi-encoder architecture comes with two flavors:", "(i) siamese (Reimers and Gurevych, 2019; Xiong et al., 2021), which uses a unique word embedding model (i.e., 1 = 2 ) that maps the query and article together in a shared dense vector space, and", "(ii) two-tower (Yang et al., 2020; Karpukhin et al., 2020), which use two independent word embedding models that encode the query and article separately into different embedding spaces.", "During inference, the articles are pre-encoded offline, and their representations are stored in an index structure.", "Then, given an input query, an exact search is performed by computing the similarities between the query representation and all pre-encoded article representations.", "The resulting scores are used to rank the articles such that the k articles that have the highest similarities with the query are returned as the topk results.", "First, we study the effectiveness of siamese bi-encoders in a zero-shot evaluation setup, i.e., pre-trained word embedding models are applied out-of-the-box without any additional fine-tuning.", "We experiment with two types of widely-used word embedding models:", "(i) models that learned contextindependent word representations, namely word2vec (Mikolov et al., 2013a,b) and fastText (Bojanowski et al., 2017), and", "(ii) models that learned contextdependent word embeddings, namely RoBERTa (Liu et al., 2019).", "RoBERTa can process texts up to a maximum input length of 512 tokens.", "Although alternative models exist to alleviate this limitation (Beltagy et al., 2020; Ainslie et al., 2020), they have all been trained on English text, and there are no French equivalents available yet.", "Therefore, we use a simple workaround that splits the text into overlapping chunks and passes each chunk in turn to the embedding model.", "To form the chunks, we consider contiguous text sequences of 200 tokens with an overlap of 20 tokens between consecutive chunks.", "For all zero-shot models, we use mean pooling on all word embeddings 
of the passage to extract a global representation for the latter and cosine similarity to score passage representations.", "Thereafter, we train our own siamese and two-tower RoBERTa-based bi-encoder models on BSARD.", "Let D = { q i , a + i } Ni =1 be the training data where each of the N instances consists of a query q i associated with a relevant (positive) article a + i .", "Using in-batch negatives (Chen et al., 2017; Henderson et al., 2017), we can create a training set T = { q i , a + i , A i } Ni =1 where A i is a set of negative articles for question q i constructed by considering the articles paired with the other questions from the same mini-batch.", "For each training instance, we contrastively optimize the negative log-likelihood of each positive article against their negative articles, i.e., L (cid:0) q i , a + i , A i (cid:1) = log exp (cid:0) s D ( q i , a + i ) / (cid:1) (cid:80) a A i { a + i } exp ( s D ( q i , a ) / ) , (6) where > 0 is a temperature parameter to be set.", "This contrastive loss allows learning embedding functions such that relevant question-article pairs will have a higher score than irrelevant ones.", "To deal with articles longer than 512 tokens, we use the same workaround as in the zero-shot evaluation and split the long sequences into overlapping chunks of 200 tokens with a window size of 20.", "However, this time, we limit the size of the articles to the first 1,000 words due to limited GPU memory.", "Although not ideal, doing so remains reasonable given that only 0.6% of the articles in our corpus have more than 1,000 words, as mentioned in Section 3.2.", "Each chunk is prefixed by the [CLS] token, and we extract a global representation for the whole article by averaging the output [CLS] token embeddings of the different chunks.", "Here, we use the dot product to compute similarities as it gives slightly better results than cosine.", "Metrics.", "We use three standard information retrieval metrics (Manning et al., 2008) to evaluate performance, namely the (macro-averaged) recall @ k (R @ k ), mean average precision @ k (MAP @ k ), and mean reciprocal rank @ k (MRR @ k ).", "Appendix B gives a detailed description of these metrics in the context of statutory article retrieval.", "We deliberately omit to report the precision @ k given that questions have a variable number of relevant articles (see Figure 2c), which makes it senseless to report it at a fixed k questions with r relevant articles will always have P @ k < 1 if k > r .", "For the same reason, k should be large enough for the recall @ k .", "Hence, we use k { 100 , 200 , 500 } for our evaluation.", "French word embedding models.", "Our focus is on a non-English dataset, so we experiment with French variants of the models mentioned above.", "Specifically, we use a 500-dimensional skip-gram word2vec model pre-trained on a crawled French corpus (Fauconnier, 2015), a 300-dimensional CBOW fastText model pre-trained on French Web data (Grave et al., 2018), and a French RoBERTa model, namely CamemBERT (Martin et al., 2020), pre-trained on 147GB of French web pages filtered from Common Crawl.", "3 Hyper-parameters & schedule.", "For BM25, we optimize the parameters on BSARD training set and find k 1 = 1 .", "0 and b = 0 .", "6 to perform best.", "Regarding the bi-encoder models, we optimize the contrastive loss using a batch size of 22 question-article pairs and a temperature of 0.05 for 100 epochs, which is approximately 20,500 steps.", "We use AdamW (Loshchilov and Hutter, 2019) with an 
initial learning rate of 2e-5, 1 = 0 .", "9 , 2 = 0 .", "999 , weight decay of 0.01, learning rate warm up over the first 500 steps, and linear decay of the learning rate.", "Training is performed on a single Tesla V100 GPU with 32 GBs of memory and evaluation on a server with a dual 20 core Intel(R) Xeon(R) E5-2698 v4 CPU @ 2.20GHz and 512 GBs of RAM.", "In Table 2, we report the retrieval performance of our models on the BSARD test set.", "Overall, the trained bi-encoder models significantly outperform all the other baselines.", "The two-tower model improves over its siamese variant on recall @ 100 but performs similarly on the other metrics.", "Although BM25 underperforms the trained bi-encoders significantly, its performance indicates that it is still a strong baseline for domain-specific retrieval.", "These results are consistent with those obtained on other in-domain datasets (Thakur et al., 2021).", "Regarding the zero-shot evaluation of siamese bi-encoder models, we find that directly using the embeddings of a pre-trained CamemBERT model without optimizing for the IR task gives poor results.", "Reimers and Gurevych (2019) noted similar findings for the task of semantic textual similarity.", "Furthermore, we observe that the word2vec-based bi-encoder significantly outperforms the fastText and BERT-based models, suggesting that pre-trained word-level embeddings are more appropriate for the task than character-level or subword-level embeddings when used out of the box.", "Although promising, these results suggest ample opportunity for improvement compared to a skilled legal expert who can eventually retrieve all relevant articles to any question and thus get perfect scores.", "As our dataset aims to give researchers a well-defined benchmark to evaluate existing and future legal information retrieval models, certain limitations need to be borne in mind to avoid drawing erroneous conclusions.", "First, the corpus of articles is limited to those collected from the 32 Belgian codes described in Table 3 of Appendix A, which does not cover the entire Belgian law as thousands of articles from decrees, directives, and ordinances are missing.", "During the dataset construction, all references to these uncollected articles are ignored, which causes some questions to end up with only a fraction of their initial number of relevant articles.", "This information loss implies that the answer contained in the remaining relevant articles might be incomplete, although it is still appropriate.", "For instance, the question Can I evict my tenants if they make too much noise? 
might not have a detailed answer within the statutory law that quantifies a specific noise threshold at which eviction is allowed.", "Instead, the landlord should probably rely more on case law and find precedents similar to their current situation (e.g., the tenant makes two parties a week until 2 am).", "Hence, some questions are better suited than others to the statutory article retrieval task, and the domain of the less suitable ones remains to be determined.", "In addition to helping advance the state-of-the-art in retrieving statutes relevant to a legal question, BSARD-based models could improve the efficiency of the legal information retrieval process in the context of legal research, therefore enabling researchers to devote themselves to more thoughtful parts of their research.", "Furthermore, BSARD can become a starting point of new open-source legal information search tools so that the socially weaker parties to disputes can benefit from a free professional assisting service.", "However, there are risks that the dataset will not be used exclusively for the public interest but perhaps also for profit as part of proprietary search tools developed by companies.", "Since this would reinforce rather than solve the problem of access to legal information and justice for all, we decided to distribute BSARD under a license with a noncommercial clause.", "Other potential negative societal impacts could involve using models trained on BSARD to misuse or find gaps within the governmental laws or use the latter not to defend oneself but to deliberately damage people or companies instead.", "Of course, we discourage anyone from developing models that aim to perform the latter actions.", "In this paper, we present the Belgian Statutory Article Retrieval Dataset (BSARD), a citizen-centric French native dataset for statutory article retrieval.", "Within a larger effort to bridge the gap between people and the law, BSARD provides a means of evaluating and developing models capable of retrieving law articles relevant to a legal question posed by a layperson.", "We benchmark several strong information retrieval baselines that show promise for the feasibility of the task yet indicate room for improvement.", "In the future, we plan to build retrieval models that can handle lengthy statutory articles and inherently exploit the hierarchy of the law.", "In closing, we hope that our work sparks interest in developing practical and reliable statutory article retrieval models to help improve access to justice for all.", "This research is partially supported by the Sector Plan Digital Legal Studies of the Dutch Ministry of Education, Culture, and Science.", "In addition, this research was made possible, in part, using the Data Science Research Infrastructure (DSRI) hosted at Maastricht University." ]
[ "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "objective", "result", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "method", "objective", "other", "other" ]
[ "Human phenotype-gene relations are fundamental to fully understand the origin of some phenotypic abnormalities and their associated diseases.", "Biomedical literature is the most comprehensive source of these relations, however, we need Relation Extraction tools to automatically recognize them.", "Most of these tools require an annotated corpus and to the best of our knowledge, there is no corpus available annotated with human phenotype-gene relations.", "This paper presents the Phenotype-Gene Relations (PGR) corpus, a silver standard corpus of human phenotype and gene annotations and their relations.", "The corpus consists of 1712 abstracts, 5676 human phenotype annotations, 13835 gene annotations, and 4283 relations 1 .", "We generated this corpus using Named-Entity Recognition tools, whose results were partially evaluated by eight curators, obtaining a precision of 87.01%.", "By using the corpus we were able to obtain promising results with two state-of-the-art deep learning tools, namely 78.05% of precision.", "The PGR corpus was made publicly available to the research community.", "2 1 Introduction Automatic extraction of relations between entities mentioned in literature is essential to obtain knowledge that was already available but required considerable manual effort and time to retrieve.", "Recently, biomedical relation extraction has gained momentum in several text-mining applications, such as event extraction and slot-filling (Lamurias and Couto, 2019b).", "Some of the commonly extracted biomedical relations are protein-protein interactions (Papanikolaou et al., 2015), [email protected] 1 Query 1, corresponds to the 10/12/2018 release of PGR 2 https://github.com/lasigeBioTM/PGR drug-drug interactions (Lamurias et al., 2019) and disease-gene relationships (Kim et al., 2017).", "There are a few worth mention systems regarding biomedical Relation Extraction (RE) (Verga et al., 2018), and that specifically focus on the extraction of phenotype-gene relations regarding different species types like plants (Xing et al., 2018) and humans (Collier et al., 2015).", "The main problem that these systems face is a lack of specific high quality annotated corpora (gold standard cor-pus), mostly because this task requires not only a considerable amount of manual effort but also specific expertise that is not widely available.", "A solution to these limitations is to generate the corpus in a fully automated manner (silver standard corpus).", "Connecting human phenotypes to genes helps us to understand the origin of some phenotypic abnormalities and their associated diseases.", "To extract human phenotype-gene relations, both entities, human phenotypes and genes have to be recognized.", "With genes, as a result of lexical features being relatively regular, many systems can successfully identify them in text (Leaman and Gonzalez, 2008).", "Even though Named-Entity Recognition (NER) research has significantly improved in the last years, human phenotype identification is still a complex task, only tackled by a handful of systems (Lobo et al., 2017).", "To generate a silver standard for phenotype-gene relation extraction, we used a pipeline that performs:", "i) NER to recognize genes and human phenotype entities;", "ii) RE to classify a relation between human phenotype and gene entities.", "First, we gathered abstracts using the PubMed API with manually defined keywords, namely each gene name, homo sapiens , and disease .", "Then we used the Minimal Named-Entity Recognizer (MER) tool (Couto and 
Lamurias, 2018) to extract gene mentions in the abstracts and the Identifying Human Phenotypes (IHP) tool (Lobo et al., 2017) to extract human phenotype mentions.", "At last, we used a gold standard relations file, provided by the Human Phenotype Ontology (HPO), to classify the relations obtained by co-occurrence in the same sentence as Known or Unknown .", "To the best of our knowledge, there is no corpus available specific to human phenotype-gene relations.", "This work, overcame this issue by creating a large and versatile silver standard corpus.", "To assess the quality of the Phenotype-Gene Relations (PGR) corpus, eight curators manually evaluated a subset of PGR.", "We obtained highly promising results, for example 87.18% in precision.", "Finally, we evaluated the impact of using the corpus on two deep learning RE systems, obtaining 69.23% (BO-LSTM) and 78.05% (BioBERT) in precision.", "The HPO is responsible for providing a standardized vocabulary of phenotypic abnormalities encountered in human diseases (Khler et al., 2017).", "The developers of the HPO also made available a file that links these phenotypic abnormalities to genes.", "These phenotype-gene relations are regularly extracted from texts in Online Mendelian Inheritance in Man (OMIM) and Orphanet (OR-PHA) databases, where all phenotype terms associated with any disease that is related with a gene are assigned to that gene in the relations file.", "In this work, we used the relations file created by HPO as a gold standard for human phenotype-gene relations.", "We started by retrieving abstracts from PubMed, using the genes involved in phenotype-gene relations and homo sapiens as keywords, and the Entrez Programming Utilities (E-utilities) web service ( https://www.ncbi.nlm. nih.gov/books/NBK25501/ ), retrieving one abstract per gene (Query 1).", "Later, we added the keyword disease and filter for abstracts in English (Query 2) 3 .", "Query 2 represents a more focused search of the type of abstracts to retrieve, such as abstracts regarding diseases, their associated phenotypes and genes.", "For each gene, we opted for the most recent abstract (Query 1) and the two most recent abstracts (Query 2).", "We opted by searching per gene and not human phenotype or the combination of both terms because this approach was the one that retrieved ab-3 Query 2, corresponds to the 11/03/2019 release of PGR stracts with the higher number of gene and human phenotype annotations, in the following NER and RE phases.", "We removed the abstracts that did not check the conditions of being written in English, with a correct XML format and content.", "The final number of abstracts was 1712 for Query 1 and 2657 for Query 2 as presented in Table", "1. 
Then we proceeded to use the MER tool (Couto and Lamurias, 2018) for the annotation of the genes and the IHP framework (Lobo et al., 2017) for the annotation of human phenotype terms.", "MER is a dictionary-based NER tool which given any lexicon or ontology (e.g., an OWL file) and an input text returns a list of recognized entities, their location, and links to their respective classes.", "To annotate genes with MER we need to provide a file of gene names and their respective identifiers.", "To this goal, we used a list created by the HUGO Gene Nomenclature Committee (HGNC) at the European Bioinformatics Institute ( http://www.genenames.org/ ).", "The HGNC is responsible for approving unique symbols and names for human loci, including protein-coding genes, ncRNA genes, and pseudogenes, with the goal of promoting clear scientific communication.", "Considering that we intended not only to map the genes to their names but also their Entrez Gene ( www.ncbi.nlm.nih. gov/gene/ ) identifiers, we used the API from MyGene ( http://mygene.info/ ) with the keyword human in species.", "The MyGene API provides several gene characteristics, including the confidence score for several possible genes that match the query.", "For this work, we chose the Entrez Gene identifier with a higher confidence score.", "After corresponding all gene names to their respective identifiers, we were left with three genes that did not have identifiers ( CXorf36 , OR4Q2 , and SCYGR9 ).", "For the first two genes ( CXorf36 and OR4Q2 ), a simple search in Entrez Gene allowed us to match them to their identifiers.", "For the last gene ( SCYGR9 ) we were not able to find an Entrez Gene identifier, so we used the HGNC identifier for that gene instead.", "We opted to use the Entrez Gene identifiers because of their widespread use in the biomedical research field.", "To the original gene list, we added gene synonyms using a synonyms list file provided by Query Abstracts Annotations Relations Phenotype Gene Known Unknown Total 1 (10/12/2018) 1712 5676 13835 1510 2773 4283 2 (11/03/2019) 2657 9553 23786 2480 5483 7963 Table 1: The final number of abstracts retrieved, number of phenotype and gene annotations extracted and the number of known, unknown and total of relations extracted between phenotype and genes, for Query 1 and", "https://github.com/macarthur-lab/ gene_lists (expanding the original list almost 3-fold).", "These synonyms were matched to their identifiers and filtered according to their length to exclude one character length synonyms and avoid a fair amount of false positives.", "The number of genes in the original gene list was 19194, and by including their synonyms that number increased to 56670, representing a total gain of 37476 genes.", "At last, we identified some missed gene annotations that were caught using regular expressions.", "These missed gene annotations were next to for-ward/back slash and dashes characters (Example 1).", "Example", "1. 
Missed gene annotation because of forward slash.", "Gene: BAX Gene Identifier: 581 Abstract Identifier: 30273005 Sentence: According to the morphological observations and DNA fragmentation assay, the MPS compound induced apoptosis in both cell lines, and also cause a significant increase in the expression of Bax /Bcl-2.", "IHP is a Machine Learning-based NER tool, specifically created to recognize HPO entities in unstructured text.", "It uses Stanford CoreNLP (Manning et al., 2014) for text processing and applies Conditional Random Fields trained with a rich feature set, combined with hand-crafted validation rules and a dictionary to improve the recognition of phenotypes.", "To use the IHP system we chose to update the HPO ontology for the most recent version 4 .", "The annotations that originated from the IHP system were matched to their HPO identifier.", "There was a total of 7478 annotations for Query 1 and 10973 annotations for Query 2 that did not match any HPO identifier.", "We put aside these annotations to be confirmed or discarded manually as some of 4 09/10/2018 release them are incorrectly identified entities but others are parts of adjacent entities that can be combined for an accurate annotation.", "We did not use the MER system for phenotype extraction mainly because a more efficient tool for this task was available without the limitations of a dictionary-based NER tool for complex terms as phenotypes are.", "After filtering abstracts that did not have annotations of both types, gene and phenotype, we gathered a total of 1712 abstracts for Query 1 and 2656 abstracts for Query 2 as presented in Table", "1. The abstracts retrieved by Query 1 were not specific enough for human phenotype-gene relations and therefore about half of them did not contained entities from both types, which we addressed in Query 2, increasing from about 2.5 relations per abstract to about 3.0 relations per abstract.", "Using a distant supervision approach, with the HPO file that links phenotypic abnormalities to genes, we were able to classify a relation with Known or Unknown .", "For this end, we extract pairs of entities, of gene and human phenotype, by co-occurrence in the same sentence (Example 2).", "The final number of both Known and Unknown annotations is also presented in Table", "1. 
Example", "Sentence: A homozygous mutation of SERPINB6 , a gene encoding an intracellular protease inhibitor, has recently been associated with post-lingual, autosomal-recessive, nonsyndromic hearing loss in humans (DFNB91).", "Gene: SERPINB6 Gene Identifier: 5269 Phenotype: hearing loss Phenotype Identifier: HP_0000365 Relation: Known 3 Evaluation To evaluate the quality of the classifier, we randomly selected 260 relations from Query 1 to be reviewed by eight curators (50 relations each, with an overlap of 20 relations).", "All researchers work in the areas of Biology and Biochemistry.", "These curators had to evaluate the correctness of the clas-sifier by attributing to each sentence one of the following options: C (correct), I (incorrect) or U (un-certain).", "The U option was given to identify cases of ambiguity and possible errors in the NER phase.", "We classified as a true positive (TP) a Known relation that was marked C by the curator, a false positive (FP) as a Known relation marked I , a false negative (FN) as a Unknown relation marked I and a true negative (TN) as a Unknown relation marked C .", "The BO-LSTM system (Lamurias et al., 2019) is a deep learning system that is used to extract and classify relations via long short-term memory networks along biomedical ontologies.", "This system was initially created to detect and classify drug-drug interactions and later adapted to detect other types of relations between entities like human phenotype-gene relations.", "It takes advantage of domain-specific ontologies, like the HPO and the Gene Ontology (GO) (Ashburner et al., 2000).", "The BO-LSTM system represents each entity as the sequence of its ancestors in their respective ontology.", "The BioBERT system (Lee et al., 2019) is a pre-trained biomedical language representation model for biomedical text mining based on the BERT (Devlin et al., 2018) architecture.", "Trained on large-scale biomedical corpora, this system is able to perform diverse biomedical text mining tasks, namely NER, RE and Question Answering (QA).", "The novelty of the architecture is that these systems (BioBERT and BERT) are designed to pretrain deep bidirectional representations by jointly conditioning on both left and right context in all layers.", "These feature allows easy adaption to several tasks without loss in performance.", "The final results are presented in Table", "2. The inter-curator agreement score, calculated from a total of 20 relations classified by eight curators, was 87.58%.", "Besides the fact that there were a few incorrectly extracted relations due to errors in the NER phase, that were discarded, the inter-curator agreement is not higher due to the complexity of the sentences where the relations between entities were identified.", "Even with highly knowledgeable curators in the fields of Biology and Biochemistry, most of them expressed difficulties in deciding which mark to choose on complex sentences that did not necessarily imply a relation between the identified entities (Example 3).", "Example", "3. 
Relation marked with U (Uncer-tain).", "Abstract Identifier: 27666346 Sentence: FRMD4A antibodies were used to probe 78 paraffin-embedded specimens of tongue squamous cell carcinoma and 15 normal tongue tissues, which served as controls.", "Mark: U The precision obtained from the test-set (about 6% of the total of relations), was 87.01%.", "Although we cannot state that this test-set is representative of the overall data-set, this is a solid evidence of the effectiveness of our pipeline to automate RE corpus creation, especially between human phenotype and genes, and other domains if a gold standard relations file is provided.", "Our lower recall is mostly due to incorrectly retrieved human phenotype annotations by IHP, that can be manually confirmed in a future optimized version of the PGR corpus, as some of them are parts of adjacent entities that can be combined for an accurate annotation.", "For BioBERT we used the available pre-trained weights for training and testing of RE model on our corpus.", "The results of BO-LSTM and BioBERT in the test-set are presented in Table", "3. We also measured the performance of the co-occurrence (i.e. assuming all-true) baseline method.", "This baseline method assumes that all relations in the test-set are Known and therefore the recall is 100%.", "These results are comparable to Relations Marked Relations Metrics Known Unknown TruePositive FalseNegative FalsePositive TrueNegative Precision Recall F-Measure 77 143 67 86 10 57 0.8701 0.4379 0.5826 Table 2: The Known and Unknown number of relations selected, the number of true positives, false negatives, false positives and true negatives, and the evaluation metrics for the Known relations.", "the ones obtained from the evaluation stage by the curators, and show the applicability of our corpus.", "BioBERT significantly outperforms BO-LSTM in all metrics proving that is indeed a viable language representation model for biomedical text mining.", "Even though the recall for both systems is relatively low, the purpose of this work was mainly to extract correct relations between entities to facilitate Machine Learning (ML), which was achieved by obtaining the precision of 69.23% (BO-LSTM) and 78.95% (BioBERT).", "The most relevant metric for a silver standard corpus, directed towards ML tools, is precision.", "ML tools depend on correct examples to create effective models that can detect new cases, afterwards, being able to deal with small amounts of noise in the assigned labels.", "This paper showed that our pipeline is a feasible way of generating a silver standard human phenotype-gene relation corpus.", "The pipeline required the application of two NER tools, and the availability of a list of known relations.", "We manually evaluated the corpus using eight curators obtaining a 87.01% precision with an interagreement of 87.58%.", "We also measured the impact of using the corpus in state-of-the-art deep learning RE systems, namely BO-LSTM and BioBERT.", "The results were promising with 69.23%, and 78.95% in precision, respectively.", "We believe that our pipeline and silver standard corpus will be a highly useful contribution to overcome the lack of gold standards.", "Future work includes manually correcting the human phenotype annotations that did not match any HPO identifier, with the potential of expanding the number of human phenotype annotations almost 2-fold and increasing the overall recall.", "Also, we intend to expand the corpus by identifying more missed gene annotations using pattern matching, 
which is possible due to our approach being fully automated.", "Another possibility is the expansion of the test-set for a more accurate capture of the variance in the corpus.", "For example, we can select a subset of annotated documents in which two curators could work to grasp the complexity of manually annotating some of these abstracts.", "Further, we intend to use semantic similarity to validate the human phenotype-gene relations.", "Semantic similarity has been used to compare different types of biomedical entities (Lamurias and Couto, 2019a), and is a measure of closeness based on their biological role.", "For example, if the BRCA1 gene is semantically similar to the BRAF gene and the BRCA1 has an established relation with the tumor phenotype, it could be possible to infer that BRAF gene also has a relation with the tumor phenotype, even if that is not evident by the training data.", "Finally, the effect of different NER systems applied to the pipeline should be studied.", "We acknowledge the help of Mrcia Barros, Joana Matos, Rita Sousa, Ana Margarida Vasconcelos, Maria Teresa Cunha and Sofia Jesus in the curating phase.", "This work was supported by FCT through funding of DeST: Deep Semantic Tagger project, ref.", "PTDC/CCI-BIO/28685/2017 ( http://dest. rd.ciencias.ulisboa.pt/ ), and LASIGE Research Unit, ref.", "UID/CEC/00408/2019." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "method", "result", "abstain", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "other", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "method", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "Simile interpretation (SI) and simile generation (SG) are challenging tasks for NLP because models require adequate world knowledge to produce predictions.", "Previous works have employed many hand-crafted resources to bring knowledge-related into models, which is time-consuming and labor-intensive.", "In recent years, pre-trained language models (PLMs) based approaches have become the defacto standard in NLP since they learn generic knowledge from a large corpus.", "The knowledge embedded in PLMs may be useful for SI and SG tasks.", "Nevertheless, there are few works to explore it.", "In this paper, we probe simile knowledge from PLMs to solve the SI and SG tasks in the unified framework of simile triple completion for the first time.", "The backbone of our framework is to construct masked sentences with manual patterns and then predict the candidate words in the masked position.", "In this framework, we adopt a secondary training process (Adjective-Noun mask Training) with the masked language model (MLM) loss to enhance the prediction diversity of candidate words in the masked position.", "Moreover, pattern ensemble (PE) and pattern search (PS) are applied to improve the quality of predicted words.", "Finally, automatic and human evaluations demonstrate the effectiveness of our framework in both SI and SG tasks.", "The simile, which is a special type of metaphor, is defined as a figurative expression in which two fundamentally different things are explicitly compared, usually using like or as (Israel et al., 2004; Zeng et al., 2020).", "It is widely used in literature because it can inspire the reader's imagination (Paul, 1970) by giving a vivid and unexpected analogy between two objects with similar attributes.", "elements: the tenor , the attribute and the vehicle , 1 which can be defined in the form of a triple ( tenor attribute , vehicle ) (Song et al., 2021).", "For example, the simile sentence Love is as thorny as rose can be extracted as the triple (love, thorny, rose), where the tenor is love, the vehicle is rose, and the attribute is thorny.", "Note that a simile triple can produce different simile sentences with different templates.", "For the example triple above, the simile sentences can be also constructed as love is thorny like rose \" with the pattern tenor is attribute like vehicle \".", "The study of simile is benefit to many downstream tasks, like sentiment analysis (Rentoumi et al., 2012), question answering (Zheng et al., 2020), writing polishment (Zhang et al., 2021) and creative writing (Gero and Chilton, 2019).", "Simile interpretation (SI) (Qadir et al., 2016; Su et al., 2016) and simile generation (SG) (Yu and Wan, 2019) are the two important tasks in the study of simile (Tong et al., 2021).", "The SI task is to find suitable attributes as a mediator between the tenor 1 Tenor: the logical subject of the comparison, usually a noun phrase.", "Attribute: what things being compared have in common, usually an adjective.", "Vehicle: the logical object of the comparison, usually a noun phrase.", "and vehicle.", "Likewise, the SG task is to select a proper vehicle for the tenor with the given attribution.", "And these two tasks can be unified into the form of simile triple completion (STC) (Song et al., 2021) as shown in Figure", "1. 
Previous works on the SI and SG tasks relied on a limited training corpus or labor-intensive knowledge base, which leads to an upper limit on the diversity of results.", "(Song et al., 2021) collected sentences containing comparator words from a Chinese essays corpus and manually annotated them to obtain the simile triple.", "Some works (Stowe et al., 2021; Gero and Chilton, 2019; Veale et al., 2016) relied on a knowledge base such as ConceptNet 2 , FrameNet 3 , which are scarce to other languages because it is time-consuming and labor-intensive to construct such a knowledge base.", "It is notable that pre-trained language models (PLMs) (Devlin et al., 2019; Radford et al., 2019) have made significant progress recently in many NLP tasks since it learns generic knowledge such as grammar, common sense from a large corpus (Davison et al., 2019; Liu et al., 2021a,b).", "Considering the suf-ficient existence of simile in the large corpus, it's reasonable to assume that PLMs are equipped with rich knowledge of similes during the pre-training stage.", "However, few works have explored directly probing the knowledge of simile from the PLMs.", "In this paper, we propose a unified framework to solve the SI and SG tasks by mining the knowledge in PLMs, which does not require fine-labeled training data or knowledge graphs.", "The backbone of our method is to construct masked sentences with manual patterns from an incomplete simile triple, and then use language models with MLM heads to predict the masked words over the task-specific vocabulary.", "We take the K words with the highest probability as the result words.", "However, there are problems with this crude approach.", "Firstly, the predicted words should be creative and surprised for the simile sentence.", "On the contrary, the PLMs tend to predict common words (e.g., good, bad) with a higher probability.", "To address this issue, we introduce a secondary pre-training stage Adjective-Noun mask Training (ANT), where only the noun or adjective contained in the amod dependencies (Nivre et al., 2017) could be masked in the MLM training process and the number of words masked times are limited.", "Secondly, the words predicted 2 https://conceptnet.io/ 3 https://framenet.icsi.berkeley.edu/fndrupal/ by MLM have a preference for different patterns.", "For this reason, we employ a pattern ensemble to obtain high-quality and robust results.", "Finally, we also introduce a prompt-search method to improve the quality of the simile component predictions.", "Our main contributions are as follows: We propose a unified framework to solve both the simile interpretation (SI) and simile generation (SG) tasks based on pre-trained models.", "To the best of our knowledge, it is the first work to introduce pre-trained language models to unify these tasks.", "We propose a secondary pre-training stage that effectively improves the prediction diversity.", "Further, we employ the pattern-ensemble and pattern-search approaches to obtain better results.", "We compare our models on both automated metrics and manual measures, and the results show that our approach outperforms the baselines in terms of diversity and correctness.", "Simile interpretation and simile generation are the two main directions of the simile study (Yu and Wan, 2019).", "The SI task (Shutova, 2010; Su et al., 2017) aims at finding a suitable attribute when given the tenor and vehicle, while the SG task (Yu and Wan, 2019) is to find a proper vehicle when given the tenor and its attribute.", "For simile 
interpretation, some works (Zheng et al., 2020; Bar et al., 2018; Xiao et al., 2016; Gagliano et al., 2016; Qadir et al., 2016) applied word vectors to decide which attribute words can fit into the tenor and vehicle domains and some other works (Gero and Chilton, 2019; Stowe et al., 2021) introduced knowledge base (Baker et al., 1998; Speer et al., 2017) to help find intermediate attributes.", "For simile generation, some works focused on constructing limited training corpus to finetune a sequence-to-sequence model (Lewis et al., 2020) by pattern-based (Zhang et al., 2021; Bollegala and Shutova, 2013) or knowledge-based approaches (Chakrabarty et al., 2020, 2021; Stowe et al., 2021).", "There are also some works (Abe et al., 2006; Hervs et al., 2007; Zheng et al., 2020) that focused more on the relationships between concepts (i.e., 5876 BookCorpus Incomplete triple Optimal multi-pattern combination Pattern Search Masked LM LM_ANT Adjective-Noun Mask Training Training Dataset Beautiful Romantic Task Vocab T V A The T is as A as V. ... 4 Classes Multi-Patterns T AV A T V ... ... ...", "tenor and vehicle) and attribute.", "However, our paper carries out the task of simile interpretation and generation uniformly in the form of simile triples.", "And instead of extracting the simile triples from the limited corpus using designed templates or a handcrafted knowledge base, we probe simile-related knowledge from PLMs.", "Pre-trained language models such as Bert and GPT (Devlin et al., 2019; Radford et al., 2019) are trained on the large-scale unlabeled corpus.", "Many recent works (Manning et al., 2020; Ettinger, 2020; Petroni et al., 2019; Shin et al., 2020; Haviv et al., 2021; Jiang et al., 2020; Zhong et al., 2021; Wang et al., 2022a,b; Li and Liang, 2021) focused on exploring the rich knowledge embedded in these PLMs.", "Manning et al. (2020) and Ettinger (2020) learned the syntactic and semantic knowledge from PLMs.", "Among these works, one branch of works(Petroni et al., 2019; Shin et al., 2020; Haviv et al., 2021; Jiang et al., 2020) designed discrete patterns to explore the common sense and world knowledge embedded in PLMs.", "In addition, some works (Zhong et al., 2021; Li and Liang, 2021) probed knowledge by searching the best-performing continuous patterns in the space of embedding.", "Inspired by the above works, in this paper, we probe the knowledge of simile in these pre-trained models and further apply pattern ensemble and pattern search to improve the results.", "As shown in Figure 1, the simile triple complete consists of two tasks: simile interpretation (SI) and simile generation (SG).", "Each simile sentence can be abstracted into the form of a triple.", "Therefore, we define a triple: ( T , A , V ) , where T , V are mainly nouns or noun phrases and represent the tenor and vehicle in the simile sentence, respectively.", "A is the attribute in simile sentences, which is an adjective.", "If the A is None in the triple, i.e. ( T , None, V ) , we define it as the simile interpretation task.", "Similarly, if the V is None, i.e. ( T , A , None ) , this will be the task of simile generation.", "The masked language model (MLM) (Devlin et al., 2019; Taylor, 1953) randomly masks the words in the input sentence and feeds the masked sentence into the pre-trained models to make predictions by other visible words.", "For example, given a sentence s = [ w 1 , w 2 , . . . , w i , . . . 
, w m ] , where the w i means the i -th word in the sentence.", "We can randomly mask s and feed the masked sequence (cid:101) s into the PLMs e.g. BERT (Devlin et al., 2019) to obtain the masked words by Equation: (cid:101) s = f mask ( s, i, v ) (1) P = f ( (cid:101) s ) (2) 5877 where the v means the Vocabulary for pre-trained models, and the i denotes the position of the masked word in Equation", "1. The is the parameters of PLMs in Equation", "2. We can select the word corresponding to the maximum probability in P as the output of the model.", "To probe the simile knowledge in pre-trained masked language models, the intuitive solution is: (1) Construct a sentence that contains the simile triple in Section 3.1 with the given pattern.", "(2) Mask the attribute A or vehicle V in this simile sentence.", "(3) Predict the words in the masked position with MLM.", "For example, when given a pattern The T is as A as V , the input sentence of MLM is The T is as [MASK] as V for the SI task while The T is as V as [MASK] for the SG task.", "To formulate this problem, we define the pattern function as p ( ) , where { SG, SI } .", "The pre-trained MLM is denoted as M and the predicted distribution Q over vocabulary V can be formulated as: Q ( w | p ( )) = exp ( M ( w | p ( ))) (cid:80) w (cid:48) V exp ( M ( w (cid:48) | p ( ))) (3) 4 Method In this section, we will introduce our proposed method of probing simile knowledge from pre-trained models.", "Our method first introduces a secondary pre-training stage Adjective-Noun mask Training (ANT) based on pre-trained language models to acquire diverse lexical-specific words.", "Then two modules of pattern ensemble and pattern search are used to obtain the high-quality predictions.", "The framework of our method is shown in Figure 2 in detail 4 .", "For the MLM task, pre-trained models prefer to output high-frequency words as candidate words since the objective of the training is to minimize the cross-entropy loss (Gehrmann et al., 2019).", "However, the components of simile triples are usually nouns or adjectives and the simile sentences are appealing due to their creativity and unexpectedness.", "Therefore, to predict more diverse and 4 we released our code at https://github.com/nairoj/Probing-Simile-from-PLM.", "specific words of simile component, we introduce a secondary pre-training stage Adjective-Noun mask Training (ANT) that fine-tune the pre-trained model with specially designed datasets.", "First, we utilize trankit (Nguyen et al., 2021) to construct the training set by selecting sentences from BookCorpus (Zhu et al., 2015) that contains amod 5 dependencies (Nivre et al., 2017).", "Second, we mask a word at the end of amod relation, instead of randomly masking, and all words are masked no more than 5 times.", "Finally, the pre-trained model is fine-tuned on the constructed dataset with MLM loss.", "In this way, the pre-trained model will avoid the bias to high-frequency words and have a higher probability of generating diverse and novel words.", "Since words predicted by MLM have a preference for different patterns and only using one pattern is insufficient, we apply the pattern ensemble to obtain better performance where different types of patterns are designed as shown in Table", "1. 
Specifically, the class I describes the relationship between the three-element T , V and A .", "However, the similes tend to highlight an obvious attribute between tenor and vehicle (Israel et al., 2004).", "We further design the class II and class III to find the attribute corresponding to the tenor and vehicle, respectively.", "Finally, the attributes of simile sentences are sometimes omitted and thus the class IV is designed to deal with this case.", "Additionally, we also design three patterns for each class to obtain high-quality and robust results.", "where P is the set of patterns p ( ) for specific task .", "Note that though we design four classes of patterns in Table 1, some classes of patterns are not required for the SI or SG task.", "Specifically, The patterns of Class IV are not used for the SI task because the attribute A is missed in Class IV.", "Likewise, the patterns of Class III are not used for the SG task due to the lack of vehicle V .", "5 An adjectival modifier of a noun (or pronoun) is any adjectival phrase that serves to modify the noun (or pronoun).", "The relation applies whether the meaning of the noun is modi-fied in a compositional way (e.g., large house) or an idiomatic way (hot dogs).", "The prediction of pattern ensemble in Section 4.2 is averaged by adding up the output distributions of all the patterns.", "Conversely, the hand-designed patterns are heuristic, which may lead to suboptimal results.", "Therefore, it is worth studying how these patterns can be combined to obtain better performance.", "To solve this problem, we introduce an approach of pattern search (PS) to find the best combination of different patterns.", "Specifically, given a simile dataset DPS , we calculate Equation 4 on DPS by iterating all subsets of the patterns.", "Finally, we select the optimal subset p best as the input of MLM to predict simile components.", "Dataset for ANT: We constructed our train set of ANT from BookCorpus.", "We first extracted the sentences with length less than 64 and then masked nouns or adjectives in them based on amod dependencies (Nivre et al., 2017).", "Meanwhile, we limited the frequency of masked words to less than", "5. Finally, we got 98k sentences as the dataset of ANT, which contains 68k noun-masked sentences and 30k adjective-masked sentences.", "Dataset for PE and PS: We evaluate our method on the dataset proposed in (Roncero and de Almeida, 2015).", "As the samples in Table 2, there are multiple attributes for each ( T , V ) pair.", "For example, the pair of (anger, fire) has the attributes of dangerous, hot, and red.", "In addition, we followed the previous work (Xiao et al., 2016) to filter the dataset by reversing simile triples with attribute frequencies greater than", "4. 
Eventually, we obtain the train set with 533 samples and the test set with 145 samples.", "Notice that the train set is the Triple Frequency (Anger, Dangerous, Fire) 8 (Anger, Hot, Fire) 8 (Anger, Red, Fire) 5 (Love, Beautiful, Rainbow) 10 (Love, Beautiful, Melody) 2 (Love, Beautiful, Rose) 9 Table 2: Some samples from the dataset.", "DPS in Section 4.3 used for the pattern search and the test set is used for evaluating all the approaches in this paper.", "Details for ANT : In adjective-nouns mask training, we utilized Adam as our optimizer and the learning rate is 5e-5.", "The batch size is set to 32 and the max sequence length is set to 64, respectively.", "Further, we utilize the Bert-Large 6 with 340M parameters as the basic model to perform adjective-nouns mask training and the number of training epoch is 3 .", "Vehicle Filtering : For simile generation, we filter the predicted vehicles that are similar to the tensor by calculating the semantic similarity with Glove embedding.", "For instance, given the sentence The child is as tall as [MASK]\", we will filter out the word father\" as its vehicle due to not meeting the simile definition 7 .", "To solve this problem, we compute the similarity score of the tenor and vehicle and filter the predicted vehicle whose score is above the threshold 0.48 8 .", "6 https://huggingface.co/Bert-large-uncased 7 Using something with similar characteristics to give an analogy to another thing 8 The threshold is the maximum similarity score of tenor and vehicle in the train set 5879 72.4 67.1 48.5 46.1 35.4 25.0 1.3 21.1 29.0 9.2 5.3 1.3 good big strong old real great 0 20 40 60 80 100 P e r c en t age ( % ) BertANT Top 15 84.2 76.3 69.7 67.1 55.3 42.1 5.3 29.0 32.9 17.1 11.8 1.3 good big strong old real great 0 20 40 60 80 100 P e r c en t age ( % ) BertANT Top 25 Figure 3: Percentage of samples whose top K predicted words contain a given common word.", "In this section, we will demonstrate that ANT could improve the diversity of predicted words for both the SI and SG tasks.", "We compare the predicted results of MLM (i.e., Bert) before and after ANT, which use the patterns The T is as [MASK] as V \" for the SI task and The T is as A as [MASK]\" for the SG task.", "Metric: We evaluate the diversity of the MLM predictions by calculating the proportion of unique words in the predicted Top K results on the test set.", "It can be formulated as p @ K = Num ( Unique _ words ) K N (5) where the Num ( Unique _ words ) means the number of unique words, and the N represents size of the test set.", "Result: To illustrate the effectiveness of ANT, We evaluate the results on the test set based on Equation", "5. 
As shown in Table 3, the diversity of predicted words significantly improves after ANT for different p @ K , specifically about 100% improvement for the SI task and about 50% for the SG task.", "Additionally, Figure 3 plots the percentage of samples on the test set, where a given common word (e.g., good, big, strong) appears in the list of the top k = 15 , 25 predicted words.", "We can observe that the frequency of common words decreases significantly after ANT.", "For example, the frequency of the common word good decreases from 72 .", "37% to 1 .", "32% when k = 15 .", "We compare the proposed approaches with the following baseline:", "(1) Meta4meaning (Xiao et al., 2016) : It uses the trained LSA vector representation according to the degree of abstraction and salience imbalance to select appropriate attributes.", "(2) GEM (Bar et al., 2018) : A method calculates the cosine similarity and normalized PMI between each attribute and tensor/vehicle based on Glove representing to obtain the best attribute with ranking.", "(3) Bert (De-vlin et al., 2019) : Directly use pre-trained MLM to predict the simile component with a single pattern as Section 3.3.", "In this paper, we utilize the bert-large-uncased as the basic pre-trained MLM.", "(4) ConScore (Zheng et al., 2020) : A connecting score is proposed to select an attribute word A for T and V .", "Our proposed approaches are denoted as: (1) ANT : Perform Adjective-Noun mask Training based on a pre-trained MLM with the datasets mentioned in Section 5.1.", "(2) ANT+PE : Based on ANT, the output distribution over vocabulary is pre-5880 Task Method MRR R@5 R@10 R@15 R@25 R@50 Meta4meaning N/A 0.221 0.303 0.339 0.397 0.454 GEM 0.312 0.198 0.254 0.278 0.405 0.562 ConScore 0.078 0.076 0.138 0.172 0.269 0.386 Bert 0.266 0.338 0.428 0.448 0.538 0.641 ANT 0.245 0.310 0.407 0.455 0.510 0.614 ANT+PE 0.241 0.331 0.400 0.448 0.552 0.628 SI ANT+PS+PE 0.270 0.379 0.490 0.524 0.579 0.655 ConScore 0.036 0.055 0.09 0.103 0.145 0.200 Bert 0.064 0.076 0.124 0.159 0.207 0.283 ANT 0.049 0.069 0.117 0.145 0.186 0.303 ANT+PE 0.036 0.034 0.083 0.097 0.131 0.172 SG ANT+PS+PE 0.095 0.124 0.145 0.159 0.214 0.290 Table 4: Automatic evaluation for SI and SG tasks.", "dicted by average on all the corresponding patterns in Table", "1. 
(3) ANT+PS+PE : Based on ANT, first the pattern search is to decide which patterns in Table 1 are applied, and then the pattern ensemble is used over these selected patterns.", "Automatic Evaluation: (1) Mean Reciprocal Rank (MRR): average on the reciprocal of the ranking r i of label words in the predicted candidates, denoted as MRR = 1 NN (cid:88) i =1 1 r i (6) (2) R @ K : the percentage of the label words appear in the top K predictions.", "previous works (Xiao et al., 2016; Bar et al., 2018), we consider a predicted word as the correct answer if it is a synonym of label word n in WordNet (Miller, 1992).", "It can be formulated as cor ( w ) = (cid:40) 1 w Synonyms ( L i ) 0 w / Synonyms ( L i ) (7) R @ K = 1 NN (cid:88) i =1 (cid:80) w K i cor ( w ) K (8) where K i denotes the list of predicted words, L i denotes the list of label words and Synonyms ( L i ) represents the synonyms of a word.", "Human Evaluation: To further prove our approaches are better than baselines, human evaluation is used to evaluate the quality of predicted simile triples from three levels (0, 1, 2).", "0 The triple is unacceptable.", "1 The triple is acceptable.", "2 The triple is acceptable and creative.", "Given a simile triple, annotators need to score it according to their subjective judgment and each triple is annotated by three annotators independently.", "We use the average score of three annotators as the quality of a simile triple.", "Automatic and Human Evaluation: The results of both automatic and human evaluation are shown in Table 4 and Table", "5. The agreement between annotators is measured using Fleiss's kappa (Ran-dolph, 2005).", "The value is 0.68 (substantial agreement) for the SI task and 0.48 (moderate agreement) for the SG task.", "(1) For both SI and SG tasks, our proposed approaches (i.e., ANT, ANT+PE, ANT+PS+PE) significantly outperform the baselines on both automatic and human evaluations.", "It proves that our methods not only enhance the diversity of predicted simile components in Section 5.3 but also their quality.", "(2) Pre-trained MLM-based methods (i.e., Bert, ANT, ANT+PE and ANT+PS+PE) perform better than the traditional methods (i.e., GEM, Meta4meaning, ConScore).", "It shows the potential of pre-trained models in probing simile knowledge.", "(3) Compared ANT with Bert, we found that though ANT improves the diversity of predicted words in Table 3, the average scores on automatic and human evaluations decrease because the simile knowledge is not involved in the ANT training process.", "However, our proposed PE and PS compensate for the performance.", "(4) The scores of automatic evaluation metrics on the SI task are remarkably higher than the SG task.", "Yet, the scores of human evaluation metrics are significantly lower than on the SG task.", "We conjecture that this may be because the list of candidate words of attribute predicted by SI are smaller than that of the vehicle for the SG task.", "For example, given the SI sample (Cloud, None , Cotton), the attribute words are almost restricted to the physical properties of the vehicle, such as Soft, while the choices of vehicle words are more varied and unexpected given the SG sample (Cloud, soft, None ) such as cotton, silk, towel\".", "In future work, we will continue to study how 5882 to mine broader or complex knowledge from pre-trained models, such as metaphor, common sense and we expect more researchers to perform related research.", "Discussion for PS: Compared ANT+PS+PE to ANT+PE, it can be included that pattern search 
brings a great improvement to the results on both automatic and human evaluations.", "To have a deeper insight into PS, the pattern subsets with high performance are listed in Table", "6. For the SI task, the optimal multi-pattern combination is { p 1 , p 5 } , which support the hypothesis proposed by (Ortony, 1979) considers that the highlighted attribute of a simile triple is more salient in the vehicle domain despite it is commonly shared by both tenor and vehicle domains.", "Specifically, pattern p 1 belonging to the Class I, models the relationship of all three simile components while the pattern p 5 belonging to Class II requires the candidate words to be the salient attribute of the vehicle.", "Similarly, for SG task, optimal multi-pattern combination is { p 3 , p 4 } , which is also a combination of the Class I pattern and the Class II pattern.", "In this paper, from the perspective of simile triple completion, we propose a unified framework to solve the SI and SG tasks by probing the knowledge of the pre-trained masked language model.", "The backbone of our method is to construct masked sentences with manual patterns from an incomplete simile triple, and then use language models with MLM heads to predict the masked words.", "Moreover, a secondary pre-training stage (the adjective-noun mask training) is applied to improve the diversity of predicted words.", "Pattern ensemble (PE) and pattern search (PS) are further used to improve the quality of predicted words.", "Finally, automatic and human evaluations demonstrate the effectiveness of our framework in both SI and SG tasks.", "This work is supported by the Key Research and Development Program of Zhejiang Province (No. 2022C01011) and National Natural Science Foundation of China (Project 61075058).", "We would like to thank the anonymous reviewers for their excellent feedback." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "method", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "other", "other" ]
[ "Incremental syntactic parsing has been an active research area both for cognitive scientists trying to model human sentence processing and for NLP researchers attempting to combine incremental parsing with language modelling for ASR and MT. Most effort has been directed at designing the right transition mechanism, but less has been done to answer the question of what a probabilistic model for those transition parsers should look like.", "A very incremental transition mechanism of a recently proposed CCG parser when trained in straightforward locally normalised discriminative fashion produces very bad results on English CCGbank.", "We identify three biases as the causes of this problem: label bias, exposure bias and imbalanced probabilities bias.", "While known techniques for tackling these biases improve results, they still do not make the parser state of the art.", "Instead, we tackle all of these three biases at the same time using an improved version of beam search optimisation that minimises all beam search violations instead of minimising only the biggest violation.", "The new incremental parser gives better results than all previously published incremental CCG parsers, and outperforms even some widely used non-incremental CCG parsers.", "It has been known for a long time that human sentence processing is highly incremental (Marslen-Wilson, 1973), with early formation of semantic representations.", "A parser that is able to form representation early must have some notion of partial structure such as S missing an object NP.", "Also, such parser needs to be able to combine partial structures into bigger partial structures.", "These two properties are at the core of Combinatory Categorial Grammar (CCG) (Ades and Steedman, 1982; Steedman, 2000).", "CCG represents partial constituents using complex categories.", "For example S/NP is the category of a transitive sentential prefix such as I like or I think I like requiring an object NP on its right.", "Such prefix categories are constructed using combinatory rules such as function composition.", "In this way we can form (mostly) left-branching derivation trees that can be parsed incrementally even with simple transition mechanisms such as shift-reduce parsers.", "Still, left branching structures are not sufficient to solve all the problems of incremental sentence processing.", "Right adjuncts are particularly problematic.", "They appear on the right of the head that they modify which means that they need to be predicted, but at the same time they are optional which makes it impossible to predict them with confidence.", "Stanojevic and Steedman (2019) tackle this issue by using incremental tree-rotation and revealing operations that allow adjuncts not to be predicted, but still be easy to attach to the head in case they appear.", "They show great improvement in the incrementality of this approach as measured by connectedness (the average stack size).", "However, Stanojevic and Steedman (2019) parser is not fully incremental because its oracle (the function that decides which transition to take in case of non-determinism) 1 is a probabilistic model that looks at the whole sentence.", "It does so using bi-directional ELMo embeddings with the addition of bi-directional LSTMs.", "The present paper describes a fully incremental version of Stanojevic and Steedman (2019) parser using an incremental oracle that does not look at the words that are not yet processed.", "We should note that by a fully incremental parsing model we do not mean a parser 
that has all the partial trees on the stack fully connected at ev-1 Note that this sense of a psycholinguistic term oracle is not the same as the one used in dependency parsing literature.", "ery point in time.", "This is a property of extremely predictive top-down parsers, while the parser that we use is a CCG bottom-up parser.", "This choice is intentionaleven though there is clear evidence that human sentence processing is highly incremental, we argue below that there is no unequivocal evidence that it is more incremental than would be allowed under the Strict Competence Hypothesis (SCH) which states that the parser cannot construct any structure that is not licensed by the competence grammar, given CCG's generalized notion of constituency (Steedman, 1989).", "Most research in incremental parsing has been directed at finding the right parsing algorithm (Ab-ney and Johnson, 1991; Resnik, 1992; Hale, 2014; Stanojevic and Stabler, 2018) or grammar formalism (Steedman, 1989; Stabler, 1991; Sturt and Lombardo, 2005; Demberg et al., 2013; Stanojevic et al., 2020), but not much has been done in addressing the issue of finding the right oracle.", "Early approaches to this problem were late-closure and minimal-attachment heuristics (Fra-zier, 1979; Pereira, 1985) which do not appear to be language universal (Cuetos and Mitchell, 1988).", "Altmann and Steedman (1988) have shown that these heuristics are overruled by human parser if the context gives evidence for a particular interpretation, in itself further evidence for processing incrementality at all levels.", "It seems natural to model the non-deterministic decision by using a probabilistic model which will condition on words and possibly on the context.", "Oracles of the modern broad coverage incremental parsers are without exception statistical in nature.", "The most typical statistical oracle is a locally normalised generative model ether in the form of simple PCFG (Stolcke, 1995; Hale, 2001), feature based (Roark and Johnson, 1999; Roark, 2001) or neural model (Dyer et al., 2016; Hale et al., 2018).", "RNNG (Dyer et al., 2016) is the main contemporary representative of this approach.", "RNNG is a top-down parser which in its first version used a non-incremental discriminative locally-normalised model.", "To make the parser fully incremental Dyer et al. (2016) exchanged the discriminative model for a generative one.", "This was not enough to get a working single-model incremental parser.", "Stern et al. 
(2017) added a couple more modifications to the search, namely word-synchronous beams with a very large number of hypotheses, that gave good results.", "Could we just apply these same techniques to the CCG parser of Stanojevic and Steedman (2019) and replace non-incremental probabilistic model with an incremental one?", "The short answer is no.", "As it will be shown later, a straightforward adaptation of the beam search and switching to a generative model does indeed improve accuracy over the model that does not do that, but not enough to make the incremental parser competitive.", "We provide an explanation for these results and offer an alternative approach.", "We identify the problem for building incremental parsing models in terms of three biases: (1) label-bias, (2) exposure-bias and (3) imbalanced probability search bias.", "These biases are well known from the machine learning literature in structured prediction, but they do not usually have the extreme effect that is seen in the case of incremental parsing.", "The techniques used in RNNG address some of these biases individually but none of the techniques addresses all three together.", "Instead of using a collection of techniques for each bias, we replace them all with a single solution in the form of a global unnormalized model trained with beam-search optimization that minimises all margin violations in the beam simultaneously.", "This single technique addresses all of the mentioned biases and gives results that outperform all previous incremental parsing models even with a relatively small beam.", "This is not to say that all unwanted biases are removedfor instance, beam search is still a biased search.", "However, the biases that remain do not have the drastic effect on performance of the three identified above.", "The parser of Stanojevic and Steedman (2019) already offers a fully incremental transition system with a non-incremental probabilistic model that gives state of the art accuracy in recovering predicate-argument dependencies.", "The parser encodes words using ELMo (Peters et al., 2018) and BiLSTM (Graves et al., 2005), sub-trees with tree encoders and the stack with Stack-LSTM (Dyer et al., 2015).", "This provides the encoding of the whole configuration together with the buffer, because the buffer is implicitly encoded via ELMo and Bi-LSTM, which look at the whole sentence.", "Given the hidden vector representation of the configuration, the parser uses a feed-forward network to determine the probability of the next action.", "There are three main types of transitions: Parsing actions: shift and reduce(X) where X is a unary or binary combinatory rule; Supertagging actions: tag(X) where X is one of the lexical supertags from English CCG-bank (Hockenmaier and Steedman, 2007); Right-adjunction actions: adjoin(X) where X is one of the nodes to which the adjunct can be adjoined.", "We refer the reader to (Stanojevic and Steedman, 2019) for more detail on the original neural model and transition system, which are not of particular relevance here.", "What matters is only that (1) the number of tagging actions is much bigger than the number of possible parsing actions and (2) that the buffer is implicitly encoded with ELMo and Bi-LSTM.", "To make the parsing model fully incremental first we modify ELMo embeddings: instead of using full ELMo embeddings we use only the forward LSTM part of ELMo.", "This decreases performance by only two points on the dev set F1 score from 89 .", "5 to 87 .", "5 .", "Finally, we replace Bi-LSTM 
with normal LSTM (Hochreiter and Schmidhuber, 1997).", "This causes a significant drop in performance to 60.9.", "We take the fully incremental model with 60.9 F1 as our baseline, and show how it can be improved, to come as close as possible to the non-incremental version that uses the same embeddings, which has an accuracy of 87.5 F1, changing only the method of training and keeping the network architecture and embeddings the same.", "Label bias is a frequent bias present in some types of locally normalised models.", "It was first recognised by Bottou (1991), but became more widely known with the publication of CRFs (Lafferty et al., 2001).", "Here we give an explanation of label-bias in the incremental parsing context.", "For a more formal treatment see Andor et al. (2016).", "In a general non-incremental setting, a discriminative parsing model assigns a probability to the whole transition sequence as $p(y \mid x)$, where $y = [y_0, y_1, \ldots, y_m]$ is a sequence of parsing actions and $x = [x_0, x_1, \ldots, x_n]$ is a sequence of words.", "Since the model is locally normalised we can express this conditional probability as the product of conditional probabilities of each parsing action: $p(y \mid x) = \prod_i p(y_i \mid y_{<i}, x)$.", "In the non-incremental version of the parser there are no independence assumptions, so every parsing action can condition on the whole sequence of words $x$.", "However, in the incremental version we can condition only on the $k(i)$ words that have been observed (have shifted from the buffer to the stack) in the first $i$ transitions.", "This makes the new model of the whole transition sequence $p(y \mid x) = \prod_i p(y_i \mid y_{<i}, x_{<k(i)})$.", "This small change has big consequences for parsing.", "Imagine the situation in which the incremental parser has processed a prefix $x_{<k(i)}$.", "This prefix may be genuinely ambiguous, making the parser keep two derivations in the beam, one in state A and the other in state B, both equally good up until that point in the sentence.", "After processing some more words, the parser might find a word that resolves the ambiguity and provides evidence that state A was correct.", "A good incremental parser would then give a higher score to all derivations that originated in state A and a lower score to derivations that originated in state B.", "However, the locally-normalised model cannot do that.", "Because the model is locally normalised, the probability of all transitions leaving any state must sum to 1, so even if all transitions are bad (they come from a bad state) they cannot all be penalised.", "What this means is that the parser cannot recover from garden-paths even with an unboundedly large beam.", "This is a deficiency of the probabilistic model caused by the introduced independence assumption that the parsing action depends only on the processed prefix.", "This makes the model effectively ignore its input in some situations.", "When we are parsing with greedy search the label-bias will have no influence, because there will be no two states that compete with each other while having a different history.", "Label-bias is harmful only in the case of beam search.", "The usual way of training any sequence prediction model is to train the prediction of the next action based on the gold history in the data.", "But at test time the model will have a predicted history rather than a gold history.", "On occasions when that predicted history is wrong, the model may not assign good probabilities to the 
future actions because it has not been exposed to this erroneous history in 2 We use the term garden-path in a more general sense than in psycholinguistic literature to mean taking any transition path that may end up being wrong.", "its training data.", "This problem is often referred to as exposure bias.", "This is again specifically relevant for incremental parsing.", "Let's say that the parser did enter into a garden-path, and that there are still some words left in the suffix.", "There will still be many transition sequences that the parser could choose from, before it finishes parsing the whole sentence.", "Even though they are all bad, because we are in a garden-path, they are not all equally bad.", "We want the parser to choose the transition sequence that would make the most out of this bad situation.", "The exposure-bias, unlike the label-bias, influ-ences greedy search too.", "In fact, exposure-bias is particularly important for greedy models because they are more likely to fall into a garden-path.", "Incremental parsing models that condition on the whole history cannot carry out exact search, and have to use approximate methods like beam search.", "Beam search is a biased search because it searches only in the local neighbourhoods of the locally most probable derivations.", "This locality is proportional to the size of the beam.", "If the beam were unbounded then search would be exact, but often we use a small beam that is only a small relaxation of greedy search.", "The fact that the beam search is biased is well known and often accepted as a necessary evil, but it has been recognised by Stern et al. (2017) that for some parsing models the issue is particularly bad because of imbalanced probability bias .", "In their case, an incremental RNNG model had actions for parsing and actions for generation of words.", "The number of parsing actions was many orders of magnitude smaller than the number of word generation actions.", "This made the probability of word generation very small.", "The expensive action of word generation happens in all derivations an equal number of times but it happens in different time steps.", "Beam search may accordingly discard a good hypothesis too early because that hypothesis has used expensive actions early.", "The imbalanced probability bias implicitly prefers states with low entropy.", "Bias for the low entropy states is often associated with label-bias , however the reasons and situations when this happens are different from imbalanced probability search bias .", "Label-bias is a deficiency of the probabilistic model, while imbalanced probability is a deficiency of the search method.", "This is visible in the context of search with an unboundedly large beam: the model with label-bias would still prefer states with low entropy while imbalanced probability bias would not be presentsearch would be exact so it would not matter at which point in time expensive actions were applied.", "Some of these biases are well known in the literature of structure prediction and various proposals have been made for reducing their effect.", "However, most of these techniques usually address only one of the biases, and the combination of these techniques is not always straightforward.", "As mentioned before, label-bias is caused by model being", "(i) discriminative,", "(ii) locally normalised and", "(iii) having independence assumptions about future input not influencing current actions.", "We could remove label-bias by removing one of these properties from the model.", 
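The effect of local normalisation on beam search can be made concrete with a small numeric sketch; the states, logits, and library choice below are invented for illustration and are not part of the parser described above:

```python
import numpy as np

# Toy illustration of label bias. Two beam hypotheses have reached different
# states A and B after the same ambiguous prefix; a later disambiguating word
# provides evidence that every continuation of state B is wrong.

def local_softmax(logits):
    # Locally normalised: the scores of all transitions leaving a state sum to 1.
    e = np.exp(logits - logits.max())
    return e / e.sum()

logits_from_A = np.array([2.0, 0.5])    # state A has one clearly good continuation
logits_from_B = np.array([-3.0, -3.1])  # every continuation of state B is bad

# Locally normalised model: B's continuations are renormalised to sum to 1,
# so its best continuation still receives probability ~0.52 and the garden
# path cannot be uniformly penalised, however large the beam is.
print(local_softmax(logits_from_A).max())  # ~0.82
print(local_softmax(logits_from_B).max())  # ~0.52

# Unnormalised / globally normalised scoring: raw transition weights are
# compared across states, so every continuation of B can score below A's.
print(logits_from_A.max(), logits_from_B.max())  # 2.0 vs -3.0
```

Under the locally normalised scores the hypothesis in state B remains competitive with A, while with unnormalised weights the whole garden path can be down-weighted, which is the behaviour the globally normalised alternatives discussed next aim for.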
"Clearly, we cannot remove property", "(iii) because we want an incremental model.", "The simplest solution is to change property", "(i) and make the model generative.", "The generative model would give us probability p ( x , y ) instead of p ( y | x ) .", "This is done by having an additional action for generation of a word following a shift action.", "Here the model cannot ignore the input because it is forced to generate it.", "It can also recognise garden-paths: if we are in a state that cannot generate the following word that means we are in a garden-path and will punish all transitions from that state.", "However, this solution introduces imbalanced probability search bias because we will introduce word-generation actions that have much higher entropy.", "Lafferty et al. (2001) advocated dropping the property", "(ii) by making the model globally normalised.", "This would allow transitions to have local weights instead of local probabilities.", "If all transitions from some state are bad, the model is able to give low weight to all of them because weights do not have to sum to one.", "Lafferty et al. (2001) advocated using conditional random fields (CRF), which are globally normalised probabilistic models, but margin-based alternatives like Max-Margin Markov Networks (M 3 N) (Taskar et al., 2004) and Structured SVM (Tsochantaridis et al., 2004) could be used in the same way.", "These particular solutions are not applicable here because they require (implicitly) enu-4114 merating all possible derivations which is not possible with a model like ours that makes only few independence assumptions.", "Exposure bias happens because model is not exposed to its errors during training time.", "With dynamic oracle (Goldberg and Nivre, 2012) parser is trained on its own prediced history instead of the gold sequence of actions (static oracle).", "Whenever the model is in some sampled state (which is not necessarily a good state), we train the model to pick the transition that is a beginning of a path that would lead the parser to the ending state with the highest achievable metric score from that state.", "Finding such a transition is not trivial for all systems and all metrics (Cross and Huang, 2016).", "To this date there have been no proposals for a dynamic oracle for CCG parsing with F1 metric over CCG dependency structures and it is not even clear if there is a polynomial solution to this problem.", "3 Therefore this is not an option that we can use.", "An alternative is to use a reinforcement learning algorithm REINFORCE (Williams, 1992).", "REINFORCE samples derivations for training just like dynamic-oracle, but does not require design of a task-specific oracle extraction algorithm.", "Instead, it implicitly minimises the expected error of the desired metric.", "4 Fried and Klein (2018) have shown that in some circumstances REINFORCE can give results almost as good as dynamic oracle, but it requires using additional techniques to compensate for high variance of the training method.", "The method of applying REINFORCE to the discriminative parser is straightforward because sampling trees from the discriminative parser is easy.", "However, that is not the case for the generative model from which we have to sample both trees and sentences at the same time.", "That is why we will apply REINFORCE only to the discriminative model.", "Imbalanced probability causes a search bias so the way it was addressed by Stern et al. (2017) is to modify the search itself.", "Stern et al. 
(2017) introduced a word synchronous beam search (WordSync) in which all the hypotheses that are competing with each other are guaranteed to have the same number of expensive actions.", "3 In parsers that do not use a normal-form there is an additional source of exposure bias arrising from alternative gold derivations.", "For this source of exposure bias there is a dynamic oracle by Xu et al. (2014) that works on a subset of CCG dependencies.", "4 A related, but different, objective is minimum expected error training (Xu et al., 2016).", "Most of these methods are either not applicable (exact CRF, exact M 3 N, dynamic oracle), or they solve only some subset of the previously mentioned biases.", "However, we can resort to some approximate methods to global models.", "For instance, instead of enumerating all hypotheses to compute normalization we could use a beam search as an approximation.", "This was done for CRF objective in (Zhou et al., 2015; Andor et al., 2016) and for (single-violation) M 3 N objective in (Wiseman and Rush, 2016).", "They all need to compare in some way the gold hypothesis to the rest of the beam, but the issue arises when the gold hypothesis falls out of the beam.", "For that situation they use different heuristics.", "CRF approximation of Zhou et al. (2015) and Andor et al. (2016) uses Early update of Collins and Roark (2004).", "During training with Early update, the beam search is stopped when the gold hypothesis falls out of the beam and the parameter update is performed.", "In the Beam-Search Optimization (BSO) method of Wiseman and Rush (2016) an alternative heuristic is used from Daume III and Marcu (2005) called LaSO.", "LaSO does the update at the same point as Early but, unlike Early, it continues decoding by removing all elements of the beam except for the gold one.", "This will potentially result in another update for the same training instance.", "We have implemented most of these methods in attempt to improve incremental CCG parsing.", "However, even though many of them gave some improvements over the baseline, none of them was good enough to give a reasonably good parser.", "To further improve the model we propose two novel approaches: Gen-Rescaling and BSO-*-All where * stands for both Early and LaSO heuristics.", "Word Synchronous beam search did solve the imbalanced probabilities issue for RNNG models, but its improvements do not transfer to CCG.", "Here we take a different approach: instead of adapting the search to the model, we adapt the model to the search.", "Since probabilities are imbalanced a possible way to solve that issue is to balance them by exponentiating them with some weight.", "We use the Beam Search Optimisation (BSO) LaSO method from the previous section to train only 3 new parameters: one for supertagging actions, one for word generation actions and one for reduce actions.", "These three numbers will be used to 4115 exponentiate the probability of the respective actions and by that put them on the same scale.", "This method is applied to a generative model and therefore addresses label-bias and imbalanced probability bias, but it does not address exposure-bias.", "After training the rescaled generative model scores every new transition sequence with: p ( a ) 2 .", "17 p ( t ) 1 .", "08 p ( w ) 1 .", "00 where a , t and w are parsing, tagging and word generation actions respectively, while the numbers are the three learned parameters that put probabilities in the same scale.", "To address all biases together using only a single 
techniques we modified margin approaches to minimize all margin violations in the beam instead of just the single one.", "When gold hypothesis falls out of the beam BSO-Early and BSO-LaSO use only the most violated hypothesis to update the parameters.", "However, there is no good reason not to update against all violations present in the beam.", "LeCun et al. (2006) argue that the good property of CRF models is that they simultaneously decrease weight of all bad hypotheses simultaneously.", "Our BSO-*-All approach can be seen as an approximation of this idea using a beam.", "This small modification does not slow down training in any significant manner (we already have a forward pass for all the additional hypotheses because they are in the beam) and it gives significant improvements in parsing accuracy.", "We have conducted experiments on English CCG-bank (Hockenmaier and Steedman, 2007).", "For evaluation we use F1 score over labelled-directed and unlabelled-undirected dependencies.", "The parser is implemented in Scala and uses DyNet (Neubig et al., 2017) for the neural computation.", "The code is available on github.", "5 There are two dependency types often used in CCG parsing research: first one from (Clark et al., 2002) which is much closer to the typical CCG notion of dependencies and the second one from (Clark and Curran, 2007) which is more formalism-neutral but less expressive.", "The only implementation of the second method is the one in C&C parser and is not able to handle all the categories that come from CCGbank.", "This is the rea-5 https://github.com/stanojevic/ Rotating-CCG/tree/incremental_max_margin son why most previous work on incremental CCG parsing has used the dependencies of Clark et al. (2002).", "In order to be able to compare to them we use the same dependencies.", "We have tested the following methods:", "base-line).", "Gen Generative model that additionally has word generation transitions.", "Gen-Rescaled Generative model that uses additional three weights to put the probabilities of all actions on the same scale.", "Disc-REINFORCE Discriminative model trained using REINFORCE to maximise the expected reward (F1 score of CCG dependencies).", "Gen-WordSync Same generative model but decoded with word-synchronous beam with main beam size 100, word-beam size 10 and no fast-tracking.", "Un-normalised model trained with Early and LaSO updates but only with single violation per update as proposed in Wiseman and Rush (2016).", "BSO-Early-All and BSO-LaSO-All Same as above but with minimizing all violations present in the beam.", "We refer to them together as BSO-*-All.", "CRF-Early Globally normalized model with Early update as proposed in (Zhou et al., 2015; Andor et al., 2016).", "CRF-LaSO Same as above but modified to use LaSO instead of Early update.", "All beam approximation methods used beam of size 32.", "The number of samples in REINFORCE is 32 and it includes a gold hypothesis for stability as suggested by Fried and Klein (2018).", "CRF-Early achieved accuracy of 36.9%, BSO-Early-Single of 51.7% and Gen-WordSync of 58.1% which are all way below the baseline.", "CRF-Early and BSO-Early-Single update methods probably gave bad results because the training is too unstable with Early heuristic that often does not get to learn from the whole transition sequence.", "We are not sure why Gen-WordSync gave 4116 Disc Gen BSO LaSO All Non-Inc 87 88 89 90 L a b e ll e d F 1 argmaxMBR Figure 1: Reranking 100 samples of dev set sentences generated by discriminative 
non-incremental model.", "bad results.", "It could be that word-synchronous decoding while addressing the imbalanced probability search bias introduces some other search bias that is even more harmful.", "Another reason could be that, unlike RNNG, we have introduced an additional bottleneck of supertagging transitions that would require additional modifications.", "We will not consider these methods in the rest of the paper.", "Figure 2 shows the results for all the other methods with different beam sizes.", "REINFORCE training does improve the robustness of the discriminative model.", "It improved greedy decoding by 10% more than any other method, but due to label-bias it cannot exploit the benefits of a larger beam.", "The generative model addresses the label bias which is evident from relatively good results with a bigger beam.", "When on top of the generative model we add Rescaling parameters the model gets even more benefit as the beam gets larger.", "The BSO-LaSO-Single model that addresses all three biases at the same time gets very good results and is outperformed by Gen-Rescaled model only in the context of a very large beam.", "Gen-Rescaled and BSO-LaSO-Single get close to 80% but do not go above it.", "Our BSO-*-All modification to beam search optimisations gives significantly better results already with a very small beam.", "With beam of size 8 BSO-LaSO-All crosses the bor-der of 80% and it improves all the way to 82.7%.", "This is only 4.8% lower than the upper bound set by the non-incremental model.", "BSO-LaSO-All is a small modification over BSO-LaSO-Single but is responsible for more than 5% of improvement over it.", "The importance of updating for all violations is particularly striking with the case of BSO-Early where the accuracy increases by 29%.", "CRF-Early already has the property of updating against all bad hypotheses in the beam but it differs from our best method in the type of loss (logis-tic vs max-margin) and the update heuristic (Early vs LaSO).", "We have also tried modifying the CRF method to use LaSO (CRF-LaSO) which made the model significantly better than the original CRF-Early but still much lower than BSO-*-All.", "Is the gap between non-incremental models and incremental models due to the imperfect search or to the imperfect prediction models?", "To test that we have conducted an experiment where the models need only to rerank a list of 100 derivations sampled from non-incremental model for each sentence in the development set.", "This puts beam search out of the equation and tests only how good are the models as discriminators between good and bad trees.", "The samples have trees of mixed quality: the worst score a parser could get by reranking the trees is 67.8 F1 while the best is 95.8 F1.", "The results in Figure 1 show that the gap between incremental and non-incremental models is around one point of F1-score.", "This is much smaller than the results with beam search would lead us to expect.", "Also here the generative model outperforms BSO-LaSO-All.", "This means that the primary reason for success of BSO-LaSO-All over Gen in beam search is probably due to its incremental scoring (a property that was also noticed by Goyal et al. 
(2019) for similar models) and/or lack of imbalanced probability bias.", "We have also conducted reranking using the Minimum Bayes Risk (MBR) method (Kumar and Byrne, 2004), which finds the hypothesis that would minimise the expected loss under some metric.", "In the parsing context that means finding the tree with the best expected F1-score (Goodman, 1996; Titov and Henderson, 2006; Stanojevic and Sima'an, 2015).", "MBR is defined only for probabilistic models, but as Titov and Henderson (2006) show it can also be adapted and applied to non-probabilistic models, such as our BSO-LaSO-All model.", "Figure 1 shows that while MBR does not make any significant difference for the non-incremental model, it makes a huge difference for the incremental models.", "Figure 2: Influence of beam size on the dev results.", "With MBR they all manage to outperform the non-incremental model.", "However, we should not credit this right away to the quality of the incremental models.", "As Fried et al. (2017) point out, improvements in reranking with a different model could be a result of model ensembling.", "Table 1 compares our strongest method on the test set against all the previously published incremental CCG models.", "The results show that it outperforms all the previous incremental models when using beams of the same size.", "The improvement is even bigger with the larger beam.", "Even though our primary goal is not to compete with non-incremental parsers, our incremental model outperforms some widely used non-incremental CCG parsers such as EasyCCG (Lewis and Steedman, 2014).", "The result is particularly good for unlabelled dependencies.", "We also report the results of applying MBR reranking with the incremental model over the samples generated by the non-incremental model.", "This model outperforms other incremental and non-incremental models on all metrics.", "The incremental CCG parser of Ambati (2016) uses a linear model trained with a structured perceptron objective and the early update heuristic.", "Given the simplicity of that model, it performs surprisingly well.", "The reason is the fact that the structured perceptron addresses all the biases identified in our paper.", "Our work has been an attempt to address the same biases with a neural model.", "Table 1: Results on the test set (Tag / UF / LF). Incremental: Hassan et al. (2008), beam=1: 59.0; Ambati (2016), beam=1: 74.6 / 67.5 / 57.5; this work, beam=1: 78.8 / 69.9 / 55.8; Goyal et al. (2019), beam=5: 85.5; this work, beam=5: 90.1 / 92.2 / 82.1; Ambati (2016), beam=16: 90.8 / 88.3 / 80.8; this work, beam=16: 91.4 / 91.5 / 82.3; this work, beam=64: 92.0 / 92.3 / 83.4. Non-incremental: Lewis and Steedman (2014): 93.0 / 88.6 / 81.3; Ambati et al. (2015): 91.2 / 89.0 / 81.4; Hockenmaier (2003): 92.2 / 92.0 / 84.4; Zhang and Clark (2011): 93.1, 85.5; Xu et al. (2016): 93.9, 86.4; Clark and Curran (2007): 94.3 / 93.0 / 87.6; Stanojevic and Steedman (2019): 95.4 / 95.8 / 90.2; this work, MBR reranking: 95.6 / 95.9 / 90.6.", "Another interesting approach to tackling label-bias while keeping the probabilistic interpretation is the error-states model of Vaswani and Sagae (2016).", "This model in its original formulation would not be computationally efficient in our setting because there are too many instances of error-states to be trained on in CCG parsing, caused by the large number of transitions.", "Possibly some modification based on sampling could remedy this.", "There has also been some recent work on reducing the imbalanced probability bias.", "Mabona et al. (2019) propose an algorithmic solution for organising beam search into buckets that have the same number of expensive transitions.", "Crabbe et al. (2019) propose a sampling based approach with the same motivation of controlling which hypotheses are being compared.", "Of relevance for CCG incrementality are Sturt and Lombardo (2005) and Demberg et al. (2013), who claimed that human sentence processing is more incremental than CCG allows under SCH for sentences like: The pilot embarrassed Mary and put herself in a very awkward situation.", "Here a male gender-biased interpretation of the antecedent the pilot conflicts with the feminine bound reflexive herself.", "The eye-movements show processing difficulty as soon as put herself is read, rather than being delayed until the completion of the VP by the PP.", "This allows subject binding to be established by VP coordination.", "Stanojevic et al. (2020) argue that Sturt and Lombardo's result is explained by the fact that the category for put is predictive of a future PP, allowing establishment of binding in advance of parsing, without strict incrementality or compromising SCH.", "The methods discussed here have been applied to the task of incremental CCG parsing, but they are not limited to CCG or even to parsing as a task.", "In principle, they could be applied to any task involving sequential structure prediction.", "We see this as the most interesting use case not only for the BSO-*-All training method but also for having an incremental CCG parser.", "Such parsers can potentially make much more informed decisions about the next word than models based on a mere word-sequence prefix, by including semantic and referential meaning (Altmann and Steedman, 1988), as well as syntax." ]
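As a rough indication of how the core training signal of such an approach could look, the following is a minimal sketch of the all-violations margin update, assuming an unnormalised scorer that assigns a scalar score to every (partial) hypothesis; the function names, the margin of 1.0, and the use of PyTorch are illustrative assumptions rather than the paper's implementation:

```python
import torch

def all_violations_loss(gold_score, beam_scores, margin=1.0):
    """Hinge loss summed over every non-gold beam hypothesis, not just the worst one.

    gold_score:  scalar tensor, score of the gold prefix at the update point.
    beam_scores: list of scalar tensors, scores of the non-gold beam entries there.
    Single-violation BSO (Wiseman and Rush, 2016) would keep only the largest of
    these hinge terms; summing them penalises all bad hypotheses at once.
    """
    violations = [torch.clamp(margin + b - gold_score, min=0.0) for b in beam_scores]
    return torch.stack(violations).sum()

# Example with made-up scores: the gold prefix scores 1.2 while three beam
# entries score 2.0, 1.5 and 0.1; only the first two violate the margin.
gold = torch.tensor(1.2)
beam = [torch.tensor(2.0), torch.tensor(1.5), torch.tensor(0.1)]
print(all_violations_loss(gold, beam))  # (1+2.0-1.2) + (1+1.5-1.2) + 0 = 3.1
```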
[ "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "other", "method", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "result", "abstain" ]
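For readers of this dump itself: the sentence array above and this label array appear to form one record, a sentence sequence paired with a parallel label sequence. A minimal sketch of how such a record is assumed to line up; the field layout and the toy entries are illustrative, not taken from the data:

```python
# One label per sentence; the two arrays are assumed to be index-aligned.
sentences = [
    "An example sentence describing the proposed method.",
    "An example sentence reporting an experimental result.",
]
labels = ["method", "result"]

assert len(sentences) == len(labels)
for sentence, label in zip(sentences, labels):
    print(f"[{label}] {sentence}")
```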
[ "Named-entities are inherently multilingual, and annotations in any given language may be limited.", "This motivates us to consider polyglot named-entity recognition (NER), where one model is trained using annotated data drawn from more than one language.", "However, a straightforward implementation of this simple idea does not always work in practice: naive training of NER models using annotated data drawn from multiple languages consistently underperforms models trained on monolingual data alone, despite having access to more training data.", "The starting point of this paper is a simple solution to this problem, in which polyglot models are fine-tuned on monolingual data to consistently and significantly outperform their monolingual counterparts.", "To explain this phenomena, we explore the sources of multilingual transfer in polyglot NER models and examine the weight structure of polyglot models compared to their monolingual counterparts.", "We find that polyglot models efficiently share many parameters across languages and that fine-tuning may utilize a large number of those parameters.", "Multilingual learningusing data from multiple languages to train a single modelcan take many forms, such as adapting a model from a high-resource to low-resource language (Xie et al., 2018; Ni et al., 2017; Mayhew et al., 2017; Cotterell and Duh, 2017; Wu and Dredze, 2019; M`arquez et al., 2003), taking advantage of beneficial multilingual features or datasets (Kim et al., 2012; Ehrmann et al., 2011; Tackstrom, 2012), and unsupervised representation learning (Devlin et al., 2018a).", "We adopt the term Polyglot from Tsvetkov et al. (2016) to refer to models that are trained on and applied to multiple languages.", "There are several advantages to training a single polyglot model across languages.", "Single models ease production requirements; only one model need be maintained.", "They can be more efficient, using fewer parameters than multiple monolingual models.", "Additionally, they can enable multilingual transfer (Devlin, 2018; Wu and Dredze, 2019; Pires et al., 2019).", "However, a key goal of polyglot learning concerns producing a single model that does better on each language than a monolingual model.", "In the context of named entity recognition, we may expect aspects of the task to transfer across languages.", "For example, since entity names tend to be transliterated or directly used across languages, even distant languages may see benefit from training a single model, e.g. 
Apple (company) is rendered as such in French rather than as Pomme.", "Intuitively, the more similar and the larger the set of languages, the more we should expect to see a benefit from considering them jointly.", "These polyglot models can take advantage of different sets of labeled corpora in different languages (Gillick et al., 2016; Mulcaire et al., 2019).", "Nevertheless, progress towards this goal remains mixed; polyglot models often do not improve results in each language (Mulcaire et al., 2019; Kondratyuk and Straka, 2019; Upadhyay et al., 2018; Conneau et al., 2019).", "Models trained across all languages come close but typically fail to outperform monolingual models.", "Thus, while multilingual learning can benefit low resource languages through transfer and simplify models by sharing one across all languages, it fails to realize a key goal: improving results in each language.", "Our experiments in 4 confirm this negative result in two different multilingual settings for 4 different neural NER models.", "Our first contribution is a technique in which a polyglot NER model can be adapted to a target language by fine-tuning on monolingual data.", "A similar continued training approach to transfer has been explored for domain adaptation in neural machine translation (Luong and Manning, 2015; Khayrallah et al., 2018); we show that it works with polyglot models for NER, improving performance by up to 3 F 1 over monolingual baselines.", "Our second contribution is an explanation of the surprising effectiveness of this technique through an extensive empirical study of polyglot models for NER.", "We compare several types of neural NER models, including three character (or byte) level architectures, and evaluate transfer across a small (4) and large (10) set of languages.", "In particular, we find that: 4 Other than Byte-to-Span (BTS; Gillick et al., 2016), most NER architectures do not benefit from polyglot training.", "Still, simpler models than BTS, with more inductive bias, can outperform BTS in both monolingual and polyglot settings.", "5.2 Polyglot models are more efficient than monolingual models in that for a given level of performance, they require vastly fewer parameters.", "This suggests that many parameters are shared cross-lingually.", "4.2 Polyglot weights transfer to unseen languages with mixed results.", "In particular, transfer can occur when there is high lexical overlap or closely related languages in the polyglot training set.", "5.3 Languages share a large number of important parameters between each other in polyglot models, and fine-tuning may utilize those parameters to strengthen it's performance.", "There is a long history of multilingual learning for NER (Kim et al., 2012; Ehrmann et al., 2011; Tackstrom, 2012).", "This work has is driven by an interest in learning NER models for many languages (Cucerzan and Yarowsky, 1999; Pan et al., 2017a) and the relative lack of data for many languages of interest (Das et al., 2017).", "Polyglot Models Johnson et al. (2017) and Lee et al. (2017) showed that a single neural MT model could benefit from being trained in a multilingual setting.", "Gillick et al. (2016) showed similar results for NER, presenting a model that benefited from learning to perform NER on 4 languages at once.", "We find that other polyglot NER models are rarely better than monolingual models in terms of absolute performance.", "Mulcaire et al. 
(2019) showed that polyglot language model pretraining can help improve performance on NER tasks, although polyglot NER training hurts.", "However, multilingual BERT (De-vlin et al., 2018b), when compared to monolingual BERT performance on NER, shows that polyglot pretraining is not always beneficial for downstream tasks.", "Finally, most recently, Kondratyuk and Straka (2019) showed how to train a single model on 75 languages for dependency parsing while retaining competitive performance or improving performance, mostly on low-resource languages.", "This work is closely related to ours, although we are predominantly interested in how we can leverage polyglot learning to improve performance across all languages.", "Cross-lingual Models Cross-lingual transfer leverages labeled data from different source languages to augment data for a target language.", "Rahimi et al. (2019) do this on a massive scale for NER, leveraging over 40 languages for crosslingual transfer.", "Xie et al. (2018) employed self-attention to combat word-order differences when transferring parameters from high-resource languages to low-resource.", "Much work in this space has looked at how to leverage a mixture of shared features and language-specific features (Kim et al., 2017), similar to domain adaptation techniques Daume III (2007).", "Recently, a lot of this work has focused on using adversarial models to force models to learn language-agnostic feature spaces (Chen et al., 2019; Huang et al., 2019).", "These works show, similar to our work, that it is possible to leverage multilingual data to increase performance across languages.", "1 We release the code for these models at https:// github.com/davidandym/multilingual-NER", "broadly follows the description in Lample et al. (2016), and we consider three different variants of this model.", "The first two are characterand byte-level models.", "2 We consider these since Gillick et al. 
(2016) showed that multilingual transfer could occur across byte-level representations and we were interested in whether characters produced similar results when more diverse languages were involved.", "Each word passes through a multi-layer BiLSTM as a sequence of characters or bytes to produce word-level representations.", "Word-level representations feed into a sentence-level BiLSTM, which outputs, for each time step, logits for all possible labels.", "The logits are then fed into a CRF model (Lafferty et al., 2001) trained to maximize the log-likelihood of the gold label sequences.", "The third variant of this model uses contex-tualized representations from multilingual BERT (mBERT) (Devlin et al., 2018b).", "This model is similar to the one described above, with the key difference being that word-level representation are obtained using a pretrained subword-level BERT model, as opposed to being built from raw charac-ters/bytes.", "As is done in the original BERT paper, 2 Early experiments found these models suffered much less from multilingual training than subword/word models.", "we treat the representation of the first subword of each word as a representation for that word, and take the concatenation of the outputs of the last 4 layers at that subword position as our final word representation.", "CharNER (Kuru et al., 2016) is a deep neural sequence labeling architecture which operates strictly at the character level during training, but uses word-level boundaries during inference.", "The model runs a 5-layer BiLSTM over sequences of characters, and is trained to predict the NER tag for each character of the sequence (without BIO labels).", "During inference a Viterbi decoder with untrained transition parameters enforces consistent character level tags across each word; no heuristics and little post-processing is necessary to obtain word-level BIO labels.", "To compare with the other architectures, we apply this model to bytes and evaluate its polyglot performance.", "Intuitively, we expect this model to do better than a word-level CRF at seeing beneficial transfer across languages, as it is closer to the model of Gillick et al. (2016): a deep, byte-level model that performs inference at the level of individual bytes.", "BTS is a sequence-to-sequence model operating over byte sequences (Gillick et al., 2016).", "The input consists of a window of UTF-8 bytes, and the output is sequences with sufficient statistics of labeled entity spans occurring in the input sequence.", "3 Because byte sequences are long BTS operates over a sliding window of 60 bytes, treating each segment independently; the model's entire context is always limited to 60 bytes.", "By consuming bytes and producing byte annotations, it has the attractive quality of being truly language-agnostic, without any language specific preprocessing.", "Despite obviating the need for language-specific preprocessing, BTS achieves comparable results to more standard model architectures with no pretraining information.", "Additionally, it showed significant improvement in monolingual CoNLL performance after being trained on all 4 CoNLL languages.", "In this paper, we find that this trend holds in our multilingual settings, although our results show lower overall numbers to those reported in Gillick et al. 
(2016).", "4 3.4 Hyperparameters All experiments are run on GeForce RTX 2080 Ti GPUs, using Tensorflow (Abadi et al., 2016).", "CRF The characterand byte-level neural CRF use a sub-token BiLSTM encoder with 2-layers and 256 hidden units.", "The sentence-level BiLSTM has 1-layer with 256 hidden units.", "All characters and bytes have randomly initialized embeddings of size 256.", "We optimized these parameters with grid-search over 1-3 layers at each level and hidden sizes of { 128, 256, 512 } .", "We train using Adam with a learning rate of 0.001 and tune the early stop parameter for each model based on development set F1 performance.", "4 We reimplemented BTS based on correspondence with the model authors.", "We matched the published results on CoNLL English, and the same overall trends, but could not match the other three CoNLL languages.", "Despite significant effort, two differences remained: the authors could not share their proprietary implementation or deep learning library, and reported using more byte segments than is available in our CoNLL dataset.", "layers with hidden size 128, Adam Optimizer) with a byte dropout of 0.2, and dropout rates of 0.8 on the final layer, and 0.5 on the other layers.", "We also train our models using a learning rate of 0.001 and early stop based on development set F1 performance.", "BTS For BTS we use the same training scheme and hyperparameters reported in Gillick et al. (2016).", "5 Since we do not have document-level information in LORELEI, we treat each separate language dataset as its a whole document and slide a window across the entire dataset at once.", "We train using SGD (Adam performed much worse), with a learning rate of 0.3, and similarly, early stop based on development set F1 performance.", "Each LORELEI language has less than half the data of a CoNLL language, but in total, the two datasets are roughly equal in size.", "The CoNLL setting consists of European languages in the same alphabet, and prior work has shown beneficial transfer in this setting (Gillick et al., 2016).", "LORELEI is more challenging because it contains more distantly related languages.", "We train a monolingual NER model for each language (14 models) and two polyglot models: CoNLL and LORELEI.", "For polyglot training we concatenate each annotated language-specific dataset into one combined corpus.", "Because our language-specific datasets are comparable in size 5 4 layers with 320 hidden units, byte dropout of 3.0 and layer dropout of 5.0.", "we do not correct for minor size differences.", "6 All models were trained over 5 random seeds, with the best model selected by development performance.", "For polyglot models, we select the best model using the average development performance across all languages.", "Results Table 1 reports test performance.", "With few exceptions, polyglot training does worse than monolingual.", "In some cases, the two settings do nearly the same (such as Character and mBERT CRFs on LORELEI) but we do not see improved results from a polyglot model.", "Murthy et al. 
(2018) found that languages with different label distributions do worse for transfer.", "We find large label distribution changes in CoNLL, but not LORELEI.", "To determine if this could explain polyglot NER failures in CoNLL, we allow our CRF models to learn language-specific label distributions via language-specific CRF transition parameters.", "However, we saw little difference in the results for either CoNLL or LORELEI (no more than 0.5 F1 on any lan-guage).", "This suggests that other factors are preventing more language transfer.", "The exception to these observations is the BTS model, which showed significant improvements in the polyglot settings, matching the conclusion of Gillick et al. (2016).", "However, our implementation failed to match the higher numbers of the original paper, and so the model is significantly worse overall compared to the other NER models.", "Perhaps the unique architecture of BTS enables it to improve in the polyglot setting.", "However, if BTS requires more training data to achieve results similar to the other models, the polyglot improvements may not hold up.", "Conclusion Polyglot NER models fail to improve over their monolingual counterparts, despite using 4 (CoNLL) or 10 (LORELEI) times more labeled data.", "Discrepancies of label priors between languages do not, by themselves, account for this.", "While polyglot models perform worse than monolingual models, they are competitive.", "This suggests that polyglot models may be successfully learning multilingual representations, but that the optimization procedure is unable to find a global 6 A uniform sampling strategy is recommended for language combinations with significant size discrepancies.", "minimum for all languages.", "To test this theory, we fine-tune the polyglot model separately for each language.", "We treat the parameters of the polyglot NER models as initializations for monolingual models of each language, and we train these models in the same fashion as the monolingual models, with the exception of using a different initial step size.", "7 With few exceptions, fine-tuned polyglot models surpass their monolingual counterparts (Table 1), improving up to 3 F 1 over monolingual baselines.", "Conclusion This demonstrates that the polyglot models are in fact learning more from observing multiple languages, and that this information can transfer to each language.", "Additionally, this indicates that the ideal optima for a monolingual model may not be achievable using standard training objectives without observing other languages; we found more regularization did not help the monolingual models.", "However, jointly optimizing all languages naively may provide too challenging an optimization landscape to obtain that optima for each language simultaneously.", "Finally, since the polyglot models demonstrate the ability to transfer information between languages, we ask: can these models generalize to unseen languages?", "We consider a similar approach to the previous section, except we now fine-tune the polyglot model on a novel language for which we have supervised NER data.", "In this setting, we only consider byte-level models, since byte vocabularies mean we can use the same parameters on unseen languages with different character sets.", "We select 4 additional LORELEI languages: Rus-7 We use the Adam optimizer settings saved from multilingual training.", "sian, Yoruba, Bengali, and Uzbek.", "For comparison, we train monolingual Byte CRF models (from scratch), following the same optimization 
protocols, as described above.", "Table 3 shows results for the monolingual model, polyglot fine-tuned, and the polyglot model evaluated without any fine-tuning (zero-shot).", "Unsurprisingly, the polyglot model does poorly in the zero-shot setting as it has never seen the target language.", "However, sharing a script with some languages in the polyglot training set can lead to significantly better than random performance (as in the case of Yoruba and Uzbek).", "In the fine-tuning setting, the results are mixed.", "Yoruba, which enjoys high script overlap with the polyglot training set, sees a large boost in performance from utilizing the polyglot parameters, whereas Uzbek, which has moderate script overlap but no family overlap, is hurt by it.", "Russian and Bengali have no script overlap with the polyglot training set, but Bengali, which is closely related to Hindi (sharing family and genus) sees a moderate amount of transfer, while Russian, which is not closely related to any language in the training set, is negatively impacted from using the polyglot weights.", "Conclusion The transferability of the polyglot parameters to unseen languages depends on a variety of factors.", "We conjecture that these factors are partially connected to relatedness to languages in the original polyglot training set.", "We now turn our attention towards understanding how polyglot models are transferring information across languages.", "We examine the types of errors made in each setting, as well as how polyglot models efficiently use parameters and how parameter weights are shared across languages.", "We broadly examine the types of errors made across each of our regimes, focusing on results from the Byte-CRF model.", "To explore what kinds of errors polyglot fine-tuning targets we plot, in Figure 1, the counts of recall errors (including O-tags) on validation data made by the monolingual and polyglot models, compared to the finetuned model.", "We find that polyglot models tend to make more errors on O-tags, indicating a tendency towards making precision errors, but that Monolingual Polyglot Model 0 250 500 750 1000 1250 1500 1750 2000 N u m be r o f E rr o r s -248 +154 +77 +69 +44 +33 +32 +14 +111 +99 +3 -1", "fine-tuning tends to correct this trend back towards monolingual performance.", "We additionally find that, compared to monolingual models, fine-tuned models do much better PER and ORG tags (in both LORELEI and CoNLL settings).", "However, the same is not true for polyglot LORELEI models, indicating that some of this transfer comes from the combination of polyglot and fine-tune training.", "One reason that polyglot fine-tuned models may perform better than monolingual models is the larger number of entities they see during training.", "Many languages contain entities in their validation set, which appear in the training sets of other languages .", "We identify such common entities as entities in the validation set of a language l which share some level of surface form overlap (either n-gram or exact match) 8 and type with an entity appearing in the training set of lan-8 We explore n-gram overlap with n = 4 , 5 , 6 , 7 , 8 and exact name overlap.", "We report the average rate across each granularity.", "guage l (cid:48) (cid:54) = l .", "We plot the average error rate (de-fined as the harmonic mean between the rate of precision errors and the rate of recall errors) of the CoNLL Byte-CRF model in Figure 2. 
We find that polyglot models have a lower error rate on common entities than monolingual models, indicating that such entities are a source of transfer in polyglot NER.", "We also see that language-specific fine-tuning tends to increase the error rate, either due to forgetting or simply to decreasing errors on non-common entities during fine-tuning.", "Many studies have demonstrated that modern neural models have enormous capacity, and that not all parameters are needed to model the target function (LeCun et al., 1990; Hinton et al., 2015; Frankle and Carbin, 2019; Sanh et al., 2019).", "Let us assume that it takes M l parameters to learn a monolingual NER model for language l .", "If we sought to train monolingual models for each language in L , we would need M = (cid:80) l LM l parameters.", "Does a polyglot model trained on these languages need M parameters?", "Perhaps the polyglot NER model is partitioning its parameters by language, and little sharing occurs across languages, so the full M parameters are needed.", "In this case, the negative results for polyglot learning could be explained by the under-parameterization of the model.", "Conversely, the model could be sharing parameters across many languages, effectively learning crosslingual representations.", "In this case, we would expect the model to need much fewer than M parameters, and the over-sharing of parameters across languages could explain the poor polyglot performance.", "Model Compression To explore polyglot model behavior, we utilize model compression techniques, which have the goal of compressing a large number of parameters into a smaller amount with minimal loss in overall model accuracy.", "We use magnitude weight pruning (Han et al., 2015) to answer two questions: (1) How many more parameters do polyglot models require than monolingual models?", "(2) Does fine-tuning learn an equally compact solution to that of monolingual training?", "We analyze the byte-level CRF because they are stronger than, or comparable to, all other models with no pretraining, and have the same number of parameters across all languages and settings (monolingual, polyglot, and fine-tuned).", "We perform our analysis on models without pretraining, as we wish to isolate the effects of polyglot learning on our models from external polyglot resources.", "We prune the lowest magnitude weights of each model in 10% increments and plot the average 9 performance over time in Figure 3. Additionally, we define over-pruning to occur for language l and model m when pruning causes the performance of model m on language l to decrease by more than 1 F1 from model m 's original performance.", "We plot the pruning threshold for each language and model 10 before over-pruning occurs in Figure 3 as well.", "We find that polyglot models require more parameters than monolingual models to maintain their performance, but are significantly more efficient, i.e. 
"For example, the CoNLL polyglot model needs 60% of its parameters to maintain performance on all languages; English, Spanish, and Dutch require fewer parameters still.", "Compared to the total number of parameters needed by the four individual monolingual models combined (M), the polyglot model needs only 30% of that, although this is paid for by an average decrease of 0.33 F1.", "This suggests that polyglot performance suffers due to over-sharing of parameters, rather than under-sharing, during joint optimization.", "(Scores here are averaged across all CoNLL or LORELEI languages.)", "(For polyglot models we report the percentage of parameters required to maintain performance on each individual language using the same model.)", "Additionally, we find that fine-tuning the polyglot models does not recover as sparse a solution as monolingual training.", "This finding suggests that either fine-tuning utilizes polyglot parameters to learn a denser solution than monolingual models, or that fine-tuning retains several high-magnitude polyglot weights not crucial to the target language.", "In the latter case, more sophisticated pruning criteria may be better suited to determining the sparsity of fine-tuned models, despite recent evidence indicating the strength of simple magnitude pruning (Gale et al., 2019).", "In addition to measuring the parameter efficiency of the polyglot models, we are interested in knowing how much overlap exists between the parameters which are most important for different languages, and how those parameters change during fine-tuning.", "This answers two important questions: 1) How do languages utilize shared polyglot parameters?", "2) Does fine-tuning benefit from many or few polyglot weights?", "To measure overlap between important weights for each language in a polyglot model, we compare the language-specific Fisher information matrix diagonals of the polyglot model.", "The Fisher information matrix has been used in this way to measure individual parameter importance on a specific task, and has been shown to be effective for retaining important information across tasks during sequential learning (Kirkpatrick et al., 2016; Thompson et al., 2019).", "For a given language l with N training examples, we estimate the Fisher information matrix F_l with the empirical Fisher information matrix \hat{F}_l, computed as \hat{F}_l = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{y \sim p} [ \nabla \log p(y | x_i) \, \nabla \log p(y | x_i)^T ].", "We take the diagonal values \hat{F}_{i,i} as an assignment of importance to parameter i.", "To compute the overlap of important weights shared between two tasks, we take the top 5%, 25%, and 50% of weights from each layer for each task (given by the tasks' Fishers) and calculate the percentage overlap between them.", "We do this for two settings: First, we consider the percentage of weights shared between a specific language and all other languages in a polyglot model.", "Second, we examine the percentage of weights that remain important to a particular language after fine-tuning.", "We plot the average overlap across all languages.", "(The expectation over y ∼ p is approximated by sampling exactly from the posterior of each x_i; we take 1,000 samples for each example.)", "We find that languages share a high number of important weights between each other in the polyglot model (40% overlap in the top 25% of weights of the LSTM layers), which helps explain how polyglot models are competitive, with fewer parameters, than multiple monolingual models.", "Interestingly, however, we find that the most important weights (top 5%) for each language share little overlap, implying that in polyglot learning, each language acquires parameters that are uniquely important to that language.",
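A rough sketch of how the per-language importance scores and their overlap could be computed (an illustration only: it assumes the model exposes a torch.distributions-style posterior with sample() and log_prob(), whereas the analysis above samples exactly from a CRF posterior; all names are ours):

```python
import torch

def empirical_fisher_diagonal(model, examples, n_samples=1000):
    """Per-parameter importance: mean of squared gradients of sampled
    log-probabilities, i.e. the diagonal of the empirical Fisher."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x in examples:
        posterior = model(x)  # assumed to behave like a torch distribution
        for _ in range(n_samples):
            y = posterior.sample()
            model.zero_grad()
            posterior.log_prob(y).backward(retain_graph=True)
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
    return {n: f / (len(examples) * n_samples) for n, f in fisher.items()}

def topk_overlap(fisher_a, fisher_b, fraction=0.25):
    """Percentage overlap between the top `fraction` most important weights
    of two languages, computed per layer and averaged."""
    overlaps = []
    for name in fisher_a:
        a, b = fisher_a[name].flatten(), fisher_b[name].flatten()
        k = max(1, int(fraction * a.numel()))
        top_a = set(torch.topk(a, k).indices.tolist())
        top_b = set(torch.topk(b, k).indices.tolist())
        overlaps.append(len(top_a & top_b) / k)
    return 100.0 * sum(overlaps) / len(overlaps)
```

The squared gradients give exactly the diagonal of the outer product in the formula above, which is all that is needed for the per-parameter importance ranking.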
"We additionally find that fine-tuning does not shift the importance of a significant number of weights (more than half of the top 25% important weights for a language in the polyglot model remain similarly important after fine-tuning).", "Surprisingly, the parameters that were most important to a language in the polyglot model are the parameters that are the most affected during fine-tuning for that language.", "Thus, we see that language-specific fine-tuning retains the importance of many shared parameters, but the most important weights to that language are significantly affected.", "Conclusions We explore the benefits of polyglot training for NER across a range of models.", "We find that, while not all models can benefit in performance from polyglot training, the parameters learned by those models can be leveraged in a language-specific way to consistently outperform monolingual models.", "We probe properties of polyglot NER models, and find that they are much more efficient than monolingual models in terms of the parameters they require, while generally maintaining a competitive performance across all languages.", "We show that the high amount of parameter sharing in polyglot models partially explains this, and additionally find that language-specific fine-tuning may use a large portion of those shared parameters.", "In future work, we will explore whether the observed trends hold in much larger polyglot settings, e.g. the Wikiann NER corpus (Pan et al., 2017b).", "Finally, regarding the sharing of weights between languages in polyglot models, our key conclusion is that standard training objectives are unable to find an optimum which simultaneously achieves high task performance across all languages.", "With this in mind, exploring different training strategies, such as multi-objective optimization, may prove beneficial (Sener and Koltun, 2018).", "On the other hand, when the objective is to maximize performance on a single target language, it may be possible to improve the proposed fine-tuning approach further using methods such as elastic weight consolidation (Kirkpatrick et al., 2016).", "We would like to thank the anonymous reviewers for their helpful comments.", "Note that typically it is not reasonable to compare the weights of two different neural networks, as they are unidentifiable (Goodfellow and Vinyals, 2015).", "However, since one model is initialized from the other, we believe it is reasonable to characterize how weights shift during language-specific fine-tuning." ]
[ "abstain", "objective", "abstain", "result", "objective", "result", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "objective", "method", "result", "objective", "objective", "objective", "objective", "objective", "objective", "other", "abstain", "other", "other", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "other", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Xuefeng Su 1,2 , Ru Li 1,3, , Xiaoli Li 4 , Jeff Z.Pan 5 , Hu Zhang 1 , Qinghua Chai 1 , Xiaoqi Han 1. School of Computer and Information Technology, Shanxi University, Taiyuan, China 2. School of E-commerce and Logistics, Shanxi Vocational University", "of Engineering Technology, Taiyuan, China 3. Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan, China 4. Institute for Infocomm Research, A*Star, Singapore", "Frame Identification (FI) is a fundamental and challenging task in frame semantic parsing.", "The task aims to find the exact frame evoked by a target word in a given sentence.", "It is generally regarded as a classification task in existing work, where frames are treated as discrete labels or represented using one-hot embeddings.", "However, the valuable knowledge about frames is neglected.", "In this paper, we propose a K nowledgeG uided F rame I dentification framework ( KGFI ) that integrates three types frame knowledge, including frame definitions, frame elements and frame-to frame relations, to learn better frame representation, which guides the KGFI to jointly map target words and frames into the same embedding space and subsequently identify the best frame by calculating the dot-product similarity scores between the target word embedding and all of the frame embeddings.", "The extensive experimental results demonstrate KGFI significantly outperforms the state-of-the-art methods on two benchmark datasets.", "F rame I dentification (FI) aims to find the exact frame evoked by a target word in a given sentence.", "A frame represents an event scenario, and possesses frame elements (or semantic roles) that participate in the event (Hermann et al., 2014), which is described in the FrameNet knowledge base (Bak-er et al., 1998; Ruppenhofer et al., 2016) grounded on the theory of Frame Semantics (Fillmore et al., 2002).", "The theory asserts that people understand the meaning of words largely by virtue of the frames which they evoke.", "In general, many words are polysemous and can evoke different frames in different contexts.", "Process stop respectively in two sentences.", "It is a challenging task to distinguish the frames evoked by target words in sentences.", "Furthermore, FI is a key step before Frame Semantic Role Labeling ( FSRL ) (Das et al., 2010, 2014; Swayamdipta et al., 2017; Kalyanpur et al., 2020) which is widely used in event recognition (Liu et al., 2016), machine reading comprehension (Guo et al., 2020b,a), relation extraction (Zhao et al., 2020), etc.", "Through FI process, hundreds of role labels in FrameNet are reduced to a manageable small set (Hartmann et al., 2017), which can significantly improve the performance of FSRL models.", "Thus, FI is a fundamental and critical task in NLP.", "FI is typically regarded as a classification task, in which class labels are frame names.", "In earlier studies, researchers manually construct features and then use supervised learning methods to learn classification models (Bejan and Hathaway, 2007; Johansson and Nugues, 2007; Das et al., 2010, 2014).", "These methods, however, do not take the valuable semantic information about frames into considera-Frame: Activity stop Frame: Process stop Def An Agent ceases an Activity without completing it A Process stops at a certain Time and Place FEs core: Agent, Activity core: Process peripheral: Degree, Duration, Manner, Time peripheral: Manner, Place, Time extra-thematic: Depictive,Purpose, Result,... 
extra-thematic: Depictive, Duration, ...", "tion, and merely treat them as discrete labels.", "The recent studies of FI use distributed representations of target words and their syntactic context to construct features, and construct classification models with deep neural network (Hartmann et al., 2017; Kabbach et al., 2018).", "These methods usually transform frame labels into one-hot representations (Hermann et al., 2014; Tackstrom et al., 2015), and then learn the embeddings of target words and frames simultaneously.", "However, the abundant semantic information and structure knowledge of frames contained in FrameNet are still neglected.", "Knowledge of frames defined by linguists, such as frame definition, frame elements and frame-to-frame relations, can enrich frame labels with rich semantic information that can potentially guide FI models to learn more unique and distinguishable representations.", "Thus, in this paper, we propose a K nowledge G uided F rame I dentification framework ( KGFI ) which consists of a Bert-based context encoder and a frame encoder based on a specialized graph convolutional network (FrameGC-N).", "In particular, the frame encoder incorporates multiple types of frame knowledge into frame representation which guides the KGFI to jointly map target words and frames into the same embedding space.", "Instead of predicting the frame label directly, KGFI chooses the best suitable frame evoked by the target word in a given sentence by calculating the dot-product similarity scores between the target word embedding and all of the frame embeddings.", "In summary, our contribution is threefold: To the best of our knowledge, we are the See the details in https://FN.icsi.berkeley.edu/fndrupal/ first to propose a unified FI method which leverages heterogeneous frame knowledge for building rich frame representations.", "We design a novel Framework KGFI, consisting of a Bert-based context encoder and a GCN-based frame encoder, which learns the model from a combination of annotated data and FrameNet knowledge base, and maps target words and frames into the same embedding space.", "Extensive experimental results demonstrate our proposed KGFI framework outperforms the state-of-the-art models across two benchmark datasets.", "FrameNet is built on the hypothesis that people understand things by performing mental operations on what they already know (Baker et al., 1998).", "Such knowledge reflecting people's cognitive experience is described as structured information packets called frames .", "A frame represents an event scenario, associated with a set of semantic roles ( frame elements (FEs) ).", "Lexical units (LUs) are capable of evoking the scenario (Kshirsagar et al., 2015).", "Frame elements in terms of how central they are to a particular frame can be divided into three distinguishing levels: core, peripheral and extra-thematic.", "Each frame has a textual definition (Def) , depicting the scenario and how the roles interact in the scenario.", "Frames are organized as a network with several kinds of frame-to-frame relations (FRs) .", "F rame I dentification ( FI ) is the task of predicting a frame evoked by the target word in a sentence.", "Let c = w 0 , w 1 ,..., w st ,..., w en ,..., w n denote a given sentence, and t = w st ,..., w en ( t c ) represent the target word, where st and en are the start and end index respectively for the target word t in the sentence.", "Let F = ( f 1 , f 2 ,..., f | F | ) denote the set of all frames in FrameNet.", "The FI model is defined as a 
mapping function G : (c, t, st, en) → f_j, subject to f_j ∈ F.", "Table 1 illustrates the structured knowledge (Def, FEs, LUs) of two different frames and their frame-to-frame relations (FRs).", "We explicitly leverage them to enrich the frame embeddings with semantic information.", "The resulting informative frame representations serve two purposes:", "1) guide our model to learn more distinguishable embeddings of target words, and", "2) improve the FI model's generalization performance in the prediction phase.", "The proposed KGFI framework consists of three components: a context encoder, a frame encoder and a scoring module, as shown in Figure 2. Specifically, the context encoder is used to represent the context-aware target word as an embedding with a Bert-based module, and the frame encoder is used to incorporate three types of knowledge about a frame into frame embeddings.", "With the guidance of the knowledge about frames, the two encoders jointly learn the embeddings of target words and frames.", "Finally, a scoring module is used to calculate the similarity scores between the given target word embedding and all frame embeddings, to identify the best frame with the highest score.", "To get the context-aware embeddings of target words, we employ Bert (Devlin et al., 2019) for our context encoder, since its architecture is a multilayer bidirectional Transformer which can aggregate information from context into the target word through the self-attention mechanism.", "The Bert model is pre-trained on a large corpus and can transfer language knowledge into the context encoder, which is very helpful for the target word representation, as the manually labeled training data for FI is very small.", "The context encoder, which we define as E_c, takes a given sentence c containing a target word t as input.", "We denote the last layer of Bert's output as H_t.", "The context encoder can be expressed as r_t = E_c(c, t, st, en) = W_c^T h_t + b_c (1), where h_t = \frac{1}{en + 1 - st} \sum_{i=st}^{en} H_t[i] (2), and W_c ∈ R^{n×m} and b_c ∈ R^m are learned parameters.", "In FrameNet, all the frames are connected into a directed graph through the frame-to-frame relations, as shown in Figure 3. Moreover, the graph convolutional network (GCN) (Kipf and Welling, 2017) has been proven effective at modeling the relationships between labels (Yan et al., 2019; Chen et al., 2019; Cheng et al., 2020; Linmei et al., 2019), and it can enrich the representation of a node by aggregating information from its neighbors.", "In order to make better use of frame knowledge and the advantages of GCNs, we propose a specialized GCN, called FrameGCN, to incorporate multiple types of frame knowledge into frame representations.", "FrameGCN is a combination of two dedicated GCNs (FEsGCN and DefGCN) and an attention network, as shown in Figure 2. FEsGCN is used to represent a frame by aggregating the FE features of its neighbors, while DefGCN is used to represent a frame by aggregating the definition features of its neighbors.", "The attention network is responsible for incorporating the outputs of the two GCNs into one unified embedding, where the adjacency matrix A is shared by the two dedicated GCNs.", "A frame-to-frame relation in FrameNet is an asymmetric relation between two frames, where one frame is called the super-frame and the other is called the sub-frame, as shown in Figure 3.",
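Before moving on to the frame encoder, a minimal sketch of the context encoder described above may help; it is an assumption-laden illustration using the HuggingFace transformers library (which the paper does not name), and the token indices of the target span are taken as given:

```python
import torch
from torch import nn
from transformers import BertModel, BertTokenizerFast

class ContextEncoder(nn.Module):
    """r_t = W_c^T h_t + b_c, where h_t averages the last-layer Bert states
    over the target-word token span (equations 1 and 2)."""
    def __init__(self, dim_out: int = 128, bert_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.proj = nn.Linear(self.bert.config.hidden_size, dim_out)  # W_c, b_c

    def forward(self, input_ids, attention_mask, span_start, span_end):
        # H_t: last hidden layer, shape (1, seq_len, hidden); batch of one here.
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        h_t = hidden[0, span_start : span_end + 1].mean(dim=0)
        return self.proj(h_t)  # r_t, shape (dim_out,)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encoder = ContextEncoder()
batch = tokenizer("The fighting stopped on Friday.", return_tensors="pt")
# The token span of the target word ("stopped") is assumed to be known.
r_t = encoder(batch["input_ids"], batch["attention_mask"], span_start=3, span_end=3)
print(r_t.shape)  # torch.Size([128])
```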
"A frame typically obtains/inherits more information from its super-frame than from its sub-frame.", "Therefore, we define the adjacency matrix of the graph as a weighted asymmetric matrix denoted as A = (a_ij)_{|F|×|F|}, where a_ij = 3 if f_j = f_i, a_ij = 2 if f_j is a super-frame of f_i, a_ij = 1 if f_j is a sub-frame of f_i, and a_ij = 0 otherwise (3).", "Three types of frame-to-frame relations, including Inherits, Using and Subframe, are used in this study.", "The FEs of a frame express its semantic roles and structure.", "Frames which have similar structures tend to have close semantics, so we regard FEs as features and use them to represent frames.", "Let FE = (e_1, e_2, ..., e_{|FE|}) denote the set of all frame elements in FrameNet, and V_e ∈ R^{|F|×|FE|} denote the feature matrix of frames represented by FEs.", "The i-th row of V_e is the feature vector of the i-th frame f_i, and can be expressed as V_e[i, :] = (ve_1, ve_2, ..., ve_{|FE|}), where ve_j = 1 if e_j ∈ FE_{f_i} and ve_j = 0 otherwise (4), and FE_{f_i} ⊆ FE is the set of FEs of frame f_i.", "FEsGCN is used to learn a map function which maps the node (frame) vectors represented by FEs to a new representation via the convolutional operation defined by A.", "We take a two-layer GCN to implement the map function, which can be expressed as g_e^{(0)}(A, V_e) = ReLU(A V_e W_e^{(0)}) and g_e^{(1)}(A, V_e) = Tanh(A g_e^{(0)}(A, V_e) W_e^{(1)}) (5).", "Here, W_e^{(0)} ∈ R^{|FE|×h} is an input-to-hidden weight matrix for the hidden layer and W_e^{(1)} ∈ R^{h×m} is a hidden-to-output weight matrix.", "Since the frame definition is a short text that depicts an event scenario and the frame elements that participate in the event, we employ Bert as a feature extractor to construct the feature matrix V_d of frames.", "Specifically, we first input a frame definition into Bert, and subsequently take the first token's representation (corresponding to the input [CLS] token) in Bert's last layer as the feature vector of the frame.", "Since the name of a frame is also meaningful, we concatenate the frame name and frame definition into one string, e.g. Activity stop: an agent ceases an activity without completing it.",
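The propagation rule above can be illustrated with a small sketch (ours, not the authors' code) that builds the weighted adjacency matrix of equation (3) and applies the two FEsGCN layers of equation (5) to binary FE features:

```python
import torch
from torch import nn

def build_adjacency(num_frames, super_frames, sub_frames):
    """Weighted asymmetric adjacency: 3 on the diagonal, 2 towards
    super-frames, 1 towards sub-frames, 0 otherwise. The relation maps
    (frame index -> list of frame indices) are an assumed input format."""
    A = torch.zeros(num_frames, num_frames)
    A.fill_diagonal_(3.0)
    for i, js in super_frames.items():
        for j in js:
            A[i, j] = 2.0
    for i, js in sub_frames.items():
        for j in js:
            A[i, j] = 1.0
    return A

class FEsGCN(nn.Module):
    """Two-layer GCN over the frame graph using binary FE feature rows."""
    def __init__(self, num_fes, hidden=256, out=128):
        super().__init__()
        self.w0 = nn.Linear(num_fes, hidden, bias=False)  # plays the role of W_e^(0)
        self.w1 = nn.Linear(hidden, out, bias=False)      # plays the role of W_e^(1)

    def forward(self, A, V_e):
        h = torch.relu(A @ self.w0(V_e))   # g_e^(0)(A, V_e)
        return torch.tanh(A @ self.w1(h))  # g_e^(1)(A, V_e)

# Toy example with 3 frames and 4 frame elements.
A = build_adjacency(3, super_frames={1: [0]}, sub_frames={0: [1]})
V_e = torch.tensor([[1., 1., 0., 0.], [1., 0., 1., 0.], [0., 0., 1., 1.]])
print(FEsGCN(num_fes=4)(A, V_e).shape)  # torch.Size([3, 128])
```

DefGCN has the same shape, with the |FE|-dimensional binary rows replaced by the n-dimensional [CLS] vectors of the frame definitions.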
"DefGCN is used to learn a map function which maps the node (frame) vectors represented by definitions to a new representation via the convolutional operation defined by A.", "We use a network similar to FEsGCN, which can be expressed as g_d^{(0)}(A, V_d) = ReLU(A V_d W_d^{(0)}) and g_d^{(1)}(A, V_d) = Tanh(A g_d^{(0)}(A, V_d) W_d^{(1)}) (6).", "Here, W_d^{(0)} ∈ R^{n×h} is an input-to-hidden weight matrix for a hidden layer with h feature maps, and W_d^{(1)} ∈ R^{h×m} is a hidden-to-output weight matrix.", "We use an attention network to dynamically incorporate the outputs of FEsGCN and DefGCN into one frame embedding through an attention weighting mechanism.", "The incorporation operation takes the following form: r_{f_i} = \sum_{k ∈ {e, d}} a_{i,k} g_k^{(1)}(A, V_k)_i (7), where r_{f_i} ∈ R^m is the embedding of the i-th frame, g_k^{(1)}(A, V_k)_i is the i-th row of the convolved representation of graph k, and a_{i,k} is the weight of the i-th frame against graph k, which is computed as a_{i,k} = exp(w_a · g_k^{(1)}(A, V_k)_i) / \sum_{k′ ∈ {e, d}} exp(w_a · g_{k′}^{(1)}(A, V_{k′})_i) (8), where w_a ∈ R^m is a learnable vector.", "After obtaining the embeddings of target words and frames through the context encoder and the frame encoder respectively, we score a target word t against each frame f_j ∈ F by computing the dot-product similarity between r_t and each r_{f_j}: S(r_t, r_{f_j}) = r_t · r_{f_j} (9).", "During training, all model parameters are jointly learned by minimizing a cross-entropy loss L = -\frac{1}{D} \sum_{i=1}^{D} \sum_{j=1}^{|F|} y_{ij} \log \hat{y}_{ij} (10), where D is the number of training examples, |F| is the total number of frames in FrameNet, y_ij are the true labels (one-hot representations of the frame labels) and \hat{y}_{ij} are the predicted probabilities.", "The predicted probability over frames is calculated by the softmax function over the scores.", "During prediction, we predict the frame evoked by the target word t to be the f_j ∈ F whose representation r_{f_j} has the highest score with r_t.", "The prediction function is defined as f* = argmax_{f_j ∈ F} S(r_t, r_{f_j}) (11).", "Note that most of the frames contain a set of lexical units (LUs) in the form of lemma.POS (e.g. stop.v).", "As shown in Table 1, the LUs of the frame Activity stop and the frame Process stop are listed in the fourth row.", "Therefore, we adopt a lexicon filtering operation to reduce the possible candidate frame set.", "Firstly, we utilize lemmatization and POS tools to convert the target word t into the form of an LU (e.g. stop.v).", "Secondly, we use this LU to match the frames whose LU sets contain it, and then use the matched frames as the possible candidate frame set F_t for the target word t.", "Finally, we predict the frame label by the following function: f* = argmax_{f_j ∈ F_t} S(r_t, r_{f_j}) (12).", "Table 2 (statistics for the FrameNet datasets): FN1.7: Train 19391, Dev 2272, Test 6714, |F| 1221, |FE| 1285; FN1.5: Train 16621, Dev 2284, Test 4428, |F| 1019, |FE| 1170.", "In light of the coverage issues of FrameNet (see Section 4.4), these two prediction functions (11 and 12) can be used in different circumstances.", "In general, we can first use the LU to obtain the candidate frame set F_t by performing lexicon filtering and then use function 12 to identify the best frame from F_t.", "However, if we cannot find any candidate frame using LUs, i.e. F_t = ∅, then we have to identify the best frame from F using function 11.",
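To make the scoring, loss and prediction steps concrete, a small illustrative sketch (again ours, not the paper's code) of equations (9) to (12) follows; candidate_idx stands for the lexicon-filtered candidate set F_t:

```python
import torch
import torch.nn.functional as F

def score_all_frames(r_t, frame_embeddings):
    """Dot-product similarity between r_t (m,) and every frame embedding
    (|F|, m), as in equation (9)."""
    return frame_embeddings @ r_t  # shape (|F|,)

def fi_loss(r_t, frame_embeddings, gold_frame_idx):
    """Softmax cross-entropy over the scores, as in equation (10)."""
    scores = score_all_frames(r_t, frame_embeddings).unsqueeze(0)
    return F.cross_entropy(scores, torch.tensor([gold_frame_idx]))

def predict_frame(r_t, frame_embeddings, candidate_idx=None):
    """Argmax over all frames (equation 11) or, when lexicon filtering
    yields a non-empty F_t, over the candidate subset (equation 12)."""
    scores = score_all_frames(r_t, frame_embeddings)
    if candidate_idx:
        candidates = torch.tensor(candidate_idx)
        return candidates[scores[candidates].argmax()].item()
    return scores.argmax().item()

# Toy usage with random embeddings for 1221 frames of dimension 128.
frames = torch.randn(1221, 128)
r_t = torch.randn(128)
print(predict_frame(r_t, frames, candidate_idx=[3, 57]))
```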
"Note that F_t only contains a couple of candidate frames, while F contains more than one thousand frames.", "This requires FI models to have very good generalization performance in order to handle the full frame set F.", "We have employed two knowledge bases, i.e. FrameNet1.5 and FrameNet1.7.", "Both of them contain various documents which have been annotated manually, including target words and the corresponding evoked frames.", "Documents and corresponding annotations in FrameNet1.7 are extended from FrameNet1.5 and thus are more complete.", "Note that train, dev and test documents in both datasets have been partitioned following Swayamdipta et al. (2017).", "Given that a sentence in the documents may contain multiple target words, we regard it as multiple pairs of target word and sentence in the train, dev and test sets.", "The statistics of the two datasets are illustrated in Table 2. To test the model's performance on the more challenging ambiguous data, following previous studies, we constructed a specialized dataset by extracting pairs of target word and annotated sentence from the test data, in which the target words are polysemous or can evoke multiple frames.", "We first compare KGFI against five existing models.", "Semafor (Das et al., 2014) is a conditional log-linear model which uses statistical features about the target word to predict the frame label.", "Hermann-14 (Hermann et al., 2014) is a joint learning model which maps frame labels and the dependency path of the target word into a common embedding space.", "SimpleFrameId (Hartmann et al., 2017) builds a classifier based on the embeddings of all the words in the sentence.", "Open-Sesame (Swayamdipta et al., 2017) builds a classifier based on a bi-directional LSTM.", "Hermann-14 converts frame labels into one-hot embeddings, while the other models treat frame labels as discrete supervision signals.", "Peng's model (Peng et al., 2018) is a joint learning model for FI and FSRL, which uses both the exemplars in the FrameNet knowledge base and the full-text annotation training data to train the model.", "In addition, we implemented two Bert-based baselines for fair comparison.", "One is called Bert-cls, which uses Bert to represent the target word in a sentence and treats discrete frame labels as supervision signals.", "The other is called Bert-onehot, which also uses the dual-encoder architecture (context encoder and frame encoder) and maps target words and frames into a common embedding space.", "The difference between KGFI and Bert-onehot is that KGFI uses GCN-based modules to incorporate frame knowledge into frame embeddings, while Bert-onehot uses a linear network to map one-hot vectors of frame labels into frame embeddings without incorporating knowledge.", "This allows us to test whether the knowledge plays a significant role in learning better frame embeddings and in the subsequent FI task.", "All Bert modules in KGFI were initialized with Bert-base.", "We set the dimensions of both the target word embedding r_t and the frame embedding r_f to 128 (m = 128), and the hidden layer size of FEsGCN and DefGCN to 256 (h = 256).", "The size of the Bert embedding is n = 768.", "The dimensions of the FE and FR feature vectors depend on the FrameNet version (see Table 2).", "For optimization, we use the BertAdam optimizer and set the learning rate to 5e-5.",
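Gathered in one place, the hyperparameters reported above amount to a small configuration (the field names below are illustrative, not taken from the authors' code):

```python
from dataclasses import dataclass

@dataclass
class KGFIConfig:
    embedding_dim: int = 128        # m: target-word and frame embedding size
    gcn_hidden: int = 256           # h: hidden size of FEsGCN and DefGCN
    bert_dim: int = 768             # n: Bert-base hidden size
    learning_rate: float = 5e-5     # BertAdam learning rate
    num_frames: int = 1221          # |F| for FrameNet 1.7 (1019 for 1.5)
    num_frame_elements: int = 1285  # |FE| for FrameNet 1.7 (1170 for 1.5)

print(KGFIConfig())
```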
"As for parameter tuning, our parameters are tuned using the development set with an early stopping strategy.", "FrameNet has a few coverage issues: (1) the LU set is incomplete for some frames; (2) many words that should evoke frames are not included in the LU sets of frames.", "Thus, we design two types of test settings: test without lexicon filtering, or test with lexicon filtering.", "Table 3 (accuracy; each entry is FN 1.7 All / Amb followed by FN 1.5 All / Amb): Semafor not reported on FN 1.7, 83.60 / 69.19 on FN 1.5; Hermann-14 not reported on FN 1.7, 88.41 / 73.10 on FN 1.5; SimpleFrameId 83.00 / 71.70 and 87.63 / 73.80; Open-Sesame 86.55 / 72.40 and 86.40 / 72.80; Peng's model 89.10 / 77.50 and 90.00 / 78.00; Bert-cls 90.17 / 79.87 and 90.13 / 78.32; Bert-onehot 90.57 / 80.66 and 91.46 / 80.78; KGFI (1-layer) 91...", "The overall testing results, as shown in Table 3, demonstrate that Bert-cls and Bert-onehot are two strong baselines, outperforming all of the prior work that does not incorporate pre-training modules into their systems.", "Bert-onehot slightly outperforms Bert-cls in all of the testing settings, indicating that jointly learning target word embeddings and frame embeddings is helpful for the FI task.", "Our best KGFI models, including KGFI (2-layer) for FrameNet1.7 and KGFI (1-layer) for FrameNet1.5, outperform all the baseline FI models in terms of accuracy.", "Compared with the stronger Bert-onehot model, our model achieves absolute improvements of 1.83% and 0.67% on the two datasets respectively in the All test setting.", "With the help of lexicon filtering with LUs in FrameNet, the model predicts the exact frame evoked by the target word among a small set of candidate frames.", "Clearly, the improvements are credited to the model's better performance in predicting frames for ambiguous target words, since the model achieves absolute improvements of 3.75% and 1.56% in the Amb test setting on the two datasets respectively.", "Results in the no-lexicon-filtering setting are not reported for the prior models except for the SimpleFrameId model, so we choose SimpleFrameId and the stronger Bert-onehot model as our baselines to compare our best model's performance under the no-lexicon-filtering setting.", "As shown in Table 4, in comparison with the stronger Bert-onehot model, our model achieves absolute improvements of 5.72% and 3.63% on the two datasets respectively in the All setting (without using LUs and comparing against more than 1000 frames), signifying that the generalization performance of our model improves significantly, considering that the model predicts the exact frame evoked by the target word among all the frames, without knowing the possible candidate frames of the target word, in the no-lexicon-filtering setting.", "To further test the performance of our best KGFI model, we use the top-K accuracy to measure the model performance without lexicon filtering.", "A higher top-K accuracy indicates that the model has learned better frame representations.", "Furthermore, the model can reduce the candidate frame set to a small subset (containing the K most probable frames), which is useful for downstream tasks, such as LU induction for FrameNet, FSRL, etc.", "As shown in Table 5, compared with the Bert-onehot baseline, our best KGFI model achieves higher top-K (K = 1, 2, 3, 5) accuracy, which further demonstrates that the model has learned better frame representations through incorporating the frame knowledge.", "Considering that the FrameNet1.5 dataset is relatively small, the simpler model (using a 1-layer GCN) achieves the best performance, while the performance of the model using a 2-layer GCN drops slightly.", "In general, no matter how many layers are adopted, our models outperform all the baselines and achieve the best performance 
on two datasets in all settings consistently.", "To test the function of each component in KGFI, we conduct the ablation study.", "As shown in Table 6, the results demonstrate that all of the three components, i.e. DefGCN, FEsGCN and attention network, are helpful for enhancing the model's performance.", "Even with DefGCN or FEsGCN individually, the performance of our model is still better than the stronger baseline Bert-onehot, which indicates the frame definition, FEs and FRs are all useful knowledge for frame representation, and our proposed GCN-based model architecture is effective to incorporate them into the informative embeddings.", "Compared with frame definition, FEs are more useful for frame representation, since the performance of GKFI (with FEsGCN) outperforms KGFI (with DefGCN), although it slightly lags behind KGFI full model (with FrameGCN).", "Note that the attention module is removed when DefGCN or FEsGCN is used as the frame encoder.", "As for the attention module, the performance of KGFI (with FrameGCN) drops when we replace it with a simple addition operation, suggesting it is necessary to use attention mechanism to integrate the outputs of DefGCN and FEsGCN.", "To test the rationality of our proposed weighting method for adjacent matrix A , we conduct a set of", "comparison experiments, in which the weighted matrix is replaced with a binary matrix.", "Binary matrix is widely used approach to express the relations between nodes in graph modeling.", "Our weighting method expresses the hierarchy relationships between frames straightforwardly.", "The results demonstrate that the weighted method has significant impact on the model's performance, and our proposed weighting method for adjacent matrix is quite reasonable, since the performance of all the models using weighted matrix outperforms their counterparts using binary matrix, shown in Table 7.", "Figure 4 shows that KGFI (w/FEsGCN) model tends to predict correct frame by finding the semantic relatedness between FEs and the context of target word.", "For instance, in sentence 1), the target word stopped may evoke Activity stop or Process stop , and the phrase the fighting is the key to distinguish two frames evoked by the word stopped , since these two frames differ in that the subject of stopped is an Agent or a Process .", "Our KGFI(w/FEsGCN) model has learned the semantic relation between the fighting phrase and FE Process , and outputs the correct frame, since FE Agent is related to an entity in general.", "The Bert-onehot model can't grasp this relation, so it outputs a wrong prediction Activity stop .", "On the other hand, the KGFI(w/ DefGCN) model tends to predict the frame with the semantic similarities between frame definition and the sentence.", "For instance, in sentence 2), the word Traversing in definition is similar to phrase passed through , so the model outputs the correct frame Traversing .", "In sentence 3), the KGFI(w/ DefGCN) model outputs a wrong prediction Quitting a place due", "to the similar meaning of the word depart in the sentence and the word leaves in the frame definition ( Quitting a place: a Self mover leaves an initial Source location. ).", "The KGFI(w/ FEsGCN) model, on the other hand, has learned that the word Ferries in the sentence is more closely related to FE Theme of frame Departing ( Departing: a Theme moves away from a Source .) 
rather than FE self mover of frame Quitting a place , and outputs the correct frame Departing , since the self mover generally refers to a living object (e.g. a person, an animal).", "Note that the frame Departing is inherited by the frame Quitting a place , so they have nearly the same FEs set except for FE Theme and FE self mover .", "In other words, our KGFI(w/ DefGCN) and KGFI(w/ FEsGCN) are complementary to each other to some extent.", "KGFI(w/ FEsGCN) can capture the subtle differences between different frames, even if the frames have close frame-to-frame or semantic relations.", "The case studies show that KGFI models can incorporate frame knowledge into its representations and guide the context encoder to learn the semantic relations between frames and the context-aware representations of target words and frames through joint learning.", "Researchers have made great effort to tackle the FI problem since it has been proposed in the Semeval-2007 (Baker et al., 2007).", "It is generally regarded as a classification task.", "The best system (Johansson and Nugues, 2007) in the SemEval-2007 adopted SVM to learn the classifier to identify frames with a set of features, such as target lemma, target word, and so on.", "SEMAFOR (Das et al., 2014) utilized a conditional model that shares features and weights across all targets, frames, and prototypes.", "These approaches use manually designed features and traditional machine learning methods to learn the classification models, while the class labels as supervision signals are discrete frame names.", "Recently, distributed feature representation and models based on neural network are used to tackle FI.", "According to the model architecture, there are two trends of work.", "One is joint learning approach that converts the discrete frame labels into continuous embedding by learning the embeddings of target words and frames simultaneously.", "For instance, Hermann-14 (Hermann et al., 2014) implemented a model that jointly maps possible frame labels and the syntax context of target words into the same latent space using the WSABIE algorithm, and the syntax context was initialized with concatenating their word embeddings.", "SimpleFrameId (Hartman-n et al., 2017) useed the concatenation of SentBOW (the average of embeddings of all the words in the sentence) to represent the context and then learns the common embedding space of context and frame labels following the line of (Hermann et al., 2014).", "The other trend is to construct the classifier model using deep neural network and regard discrete frame labels as supervision signals, which is similar to those earlier work.", "Open-Sesame (Swayamdipta et al., 2017) used a bidirectional LSTM to construct the FI classifier.", "Peng (Peng et al., 2018) proposed a joint learning model for FI and FSRL, which adopted a multitask model structure.", "Different from previous studies, this paper focuses on how to represent frames by incorporating frame knowledge into frame representations and enriching frame labels with semantic information.", "In this work, we propose a novel idea that leverages frame knowledge, including frame definition, frame elements and frame-to-frame relations, to improve the model performance of FI task.", "Our proposed KGFI framework mainly consists of a Bert-based context encoder and a GCN-based frame encoder which can effectively incorporate multiple types of frame knowledge in a unified framework and jointly map frames and target words into the same semantic space.", "Extensive 
experimental results demonstrate that all kinds of knowledge about frames are useful for enriching the representation of frames, and the better frame representation is helpful for FI task.", "The experimental results also show that the proposed model achieves significantly better performance than seven state-of-the-art models across two benchmark datasets.", "We thank the anonymous reviewers for their helpful comments and suggestions.", "This work was supported by the National Natural Science Foundation of China (No.61936012, No.61772324) and the Open Project Foundation of Intelligent Information Processing Key Laboratory of Shanxi Province (No. CICIP2018007)." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "objective", "abstain", "abstain", "other", "other" ]
[ "Eye of the Beholder", "Sian Gooding Dept of Computer Science and Technology University of Cambridge [email protected]", "Ekaterina Kochmar Dept of Computer Science University of Bath [email protected]", "Seid Muhie Yimam Language Technology group Universitt Hamburg, Germany [email protected] Abstract", "However, an important aspect of CWI and LS that is often neglected is that text complexity is not an objective notion homogeneous across various target populations: what is challenging for a reader with a particular background (for example, a non-native reader at a lower level of language proficiency) would not necessarily be challenging for readers with other backgrounds (for example, more proficient readers) (Bingel, 2018).", "A number of factors may contribute to that, including the reader's age and level of language proficiency, among others (Paetzold and Specia, 2016c).", "LS systems often aim to address the needs of specific reader populations, such as children, non-native speakers, or readers with particular cognitive impairments.", "Thus, personalization in LS typically results in specialized simplification tools aimed at certain groups of readers (Carroll et al., 1998; Rello et al., 2013; Evans et al., 2014), with only a few systems addressing adaptation to the readers' needs in a more dynamic way (Bingel et al., 2018; Yimam and Biemann, 2018a,b; Scarton and Specia, 2018).", "Lexical complexity is a highly subjective notion, yet this factor is often neglected in lexical simplification and readability systems which use a one-size-fits-all\" approach.", "In this paper, we investigate which aspects contribute to the notion of lexical complexity in various groups of readers, focusing on native and nonnative speakers of English, and how the notion of complexity changes depending on the proficiency level of a non-native reader.", "To facilitate reproducibility of our approach and foster further research into these aspects, we release a dataset of complex words annotated by readers with different backgrounds.", "Complex word identification (CWI) is the first step in a lexical simplification (LS) pipeline, concerned with identification of words in text that are in need of further simplification (Shardlow, 2013).", "For instance, in example (1) a CWI system might identify engulfed as a complex word, which would allow an LS system to replace it with a simpler alternative, e.g. 
flooded , in the next step (Paetzold and Specia, 2016a; Gooding and Kochmar, 2019b): (1) Water engulfed Beringia.", "Water flooded Beringia.", "It has been shown that accurate CWI can sig-nificantly reduce errors in simplification (Shard-low, 2014), thus improving the quality of an LS system output (Lee and Yeung, 2018).", "In addition, CWI has been shown to be an important component in readability assessment systems (Mad-dela and Xu, 2018) and in vocabulary acquisition modules of educational applications (Zaidi et al., 2020).", "Despite CWI being one of the key steps in an LS pipeline in need of adaptation to readers' profiles, this is rarely addressed in practice (Lee and Yeung, 2018; Bingel, 2018).", "For instance, existing and widely used datasets on CWI present a homogeneous view on word complexity, merging annotations from various groups of readers (Paet-zold and Specia, 2016c; Yimam et al., 2018).", "From the cognitive perspective, little is still known about the challenges that particular readers face when developing their reading skills and about the factors contributing to their vocabulary acquisition.", "In this paper, we investigate factors focusing on the two key background aspects in the development of reading abilities: whether a reader is a native speaker of the language, and if not, what is the reader's level of language proficiency .", "We use the data from Yimam et al. (2017a), which contains English sentences where complex words are annotated by native and non-native speakers of English, spanning three different levels of language proficiency.", "We investigate which aspects contribute to the notion of lexical complexity for readers with different backgrounds and how the notion of complexity changes depending on the proficiency levels of the non-native readers.", "We show that the best models for predicting complexity are trained using the annotations of the target audience.", "We perform feature analysis by observing which correlate most with the notion of complexity for native and non-native audiences.", "We analyse the distribution of features for complex words across differing proficiency levels.", "Finally, we release a CWI dataset annotated by readers with different backgrounds.", "CWI was established as an essential step in LS in Shardlow (2013), which demonstrated that without this step, LS systems tend to overor under-simplify, thus rendering the output less useful for the readers.", "Early approaches to this task considered simplification of all words (Devlin and Tait, 1998; Bott et al., 2012) and use of frequency-based thresholds (Zeng et al., 2005; Biran et al., 2011), however Shardlow (2013) shows that classification algorithms are more precise in identification of complex words than both these approaches.", "Recent shared tasks on CWI (Paetzold and Specia, 2016c; Yimam et al., 2018) helped it gain popularity in the NLP community as they provide researchers with shared data and benchmarks.", "Most systems participating in the shared tasks addressed CWI with classical machine learning algorithms, with the best-performing systems using ensemble-based approaches.", "Current state-of-the-art results on CWI are achieved by a sequence-labeling model of Gooding and Kochmar (2019a), however models of such type are less easily interpretable.", "The question of what contributes towards the notion of word complexity has been investigated before, for example in readability studies.", "Word length is commonly believed to correlate with text complexity and is included as 
a component in a wide range of readability formulas (Dale and Chall, 1948; Kincaid et al., 1975; Dubay, 2004).", "Frequency , another factor often considered in readability and text simplification approaches (Rudell, 1993; De Belder and Moens, 2010), was shown to correlate and cause word familiarity , which in its turn contributes to higher word recognition and lower reaction times (Connine et al., 1990; Morrel-Samuels and Krauss, 1992).", "Notably, word length and frequency have been widely used in CWI systems, and are reported to be good, cross-linguistic predictors of complexity (Bingel and Bjerva, 2018).", "Other factors considered important for word complexity include a variety of psycholinguistic properties, including word's age of acquisition , concreteness , and imagability (Carroll and White, 1973; Zevin and Seidenberg, 2002; Begg and Paivio, 1969).", "At the same time, not all factors are equally applicable to all groups of readers: for instance, while frequency may be an important factor for second language learners, other populations may be more affected by the length of a word or the occurrence of certain character combinations (Rudell, 1993; Rello et al., 2013).", "Yet, little is still known about the factors contributing to word complexity for native vs non-native readers as well as for non-native readers at different levels of language proficiency.", "The most comprehensive CWI dataset to date was released by Yimam et al. (2017a) and further used in the CWI shared task 2018 (Yimam et al., 2018).", "This dataset has been annotated for complex words across a number of languages, including English, German, and Spanish.", "In this paper, we use the English portion of the data with the information about annotators' backgrounds 1 .", "The dataset contains texts from 3 different sources: professionally written news articles (NEWS ), amateurishly written news articles (WIKINEWS ), and WIKIPEDIA articles.", "The annotation was performed using the Amazon Mechanical Turk platform, where a total of 20 annotators, 10 native speakers and 10 nonnative speakers, were asked to mark words that they deemed complex for a given target readership, particularly children , language learners , and people with reading impairments .", "The workers were presented with text, consisting of 5 to 10 sentences (Figure 1), and were asked to select lexical items that they found complex (Figure 2).", "Workers use 1 CWI Dataset with Language levels their mouse pointer to highlight the complex units.", "The complex words or phrases included content words (e.g., nouns, verbs, adjectives, and adverbs) and phrases up to 50 characters in length.", "In this dataset, the complex units are considered if they are selected by at least one worker (Yimam et al., 2017a,b).", "Non-native speakers of English were asked to report their proficiency levels (beginner, intermediate, advanced).", "For our experiments, we concentrate on complex words only and disregard complex phrases.", "A break-down of proficiency labels for words (across all genres) is presented in Table 5, with label 1 denoting complex words and label 0 used for non-complex words.", "It is worth noting that the groups of annotators labelling portions of the dataset were not fixed.", "Within each group, the proficiency distribution varied, with some containing no annotators from a given class.", "We firstly show that when predicting word complexity, the needs of sub-groups differ and are best predicted using models targeting them specifically.", "We demonstrate 
that the best performing models for a sub-group are trained with the annotations of that group using a classical machine learning approach.", "Secondly, we analyse the correlation of features with the number of annotators who found the word complex for both native and non-native groups.", "Finally, we investigate how the distributions of features vary for words marked as complex across audiences.", "To gain fundamental insights into the performance across proficiency groups, we run experiments using the CAMB system by Gooding and Kochmar (2018) as it achieved the best results across all binary and two probabilistic tracks in the CWI 2018 shared task (Yimam et al., 2018).", "Furthermore, the code for this system has been made publicly available by the authors.", "The CAMB system relies on 27 features in total.", "Feature types include lexical, syntactic, frequency-based and other aspects of information about individual words, outlined below.", "Lexical Features : For each target word, the word itself as well as the length and number of syllables (obtained using the Datamuse API) is included.", "Additionally, the number of senses, hypernyms and hyponyms are collected for the word lemma using WordNet (Fellbaum, 2005).", "Finally, the number of phonemes for the word are included sourced from the MCR Psycholinguistic Database (Wilson, 1988).", "POS & Dependency Parse Relations : The target sentence is parsed using the NLPCore pipeline.", "Following this, the number of dependency relations are counted to produce a feature.", "The part-of-speech tag for the word is additionally included.", "List-Based Features : A set of binary features are used that indicate the presence of the target word in a given list.", "The source of each list is outlined below: SubIMDB : using the SubIMDB corpus (Paet-zold and Specia, 2016b), the word frequencies are calculated from the Movies and Series for Children ' section.", "The top 1 , 000 most frequent words are then included.", "Simple Wikipedia (SimpWiki) : a list of the top 6 , 368 words contained in the Simple Wikipedia (Coster and Kauchak, 2011).", "Ogden's Basic English : the top 1 , 000 words from Ogden's Basic English list (Ogden, 1968).", "Cambridge Advanced Learner's Dictionary (CALD): 2 the entries contained in the Cambridge Advanced Learner's Dictionary.", "Word Frequency : The frequency of the target word is estimated using the Google dataset of n-grams (Goldberg and Orwant, 2013).", "Additionally, the Thorndike-Lorge written frequency derived from Thorndike and Lorge (1944) is obtained from the MCR Psycholinguistic Database (Wilson, 1988).", "Psycholinguistic Features : Finally, the following features are extracted from the MCR Psycholinguistic Database (Wilson, 1988): Word familiarity rating (FAM) Imagability rating (IMG) , representing the ease of associating the word with an image.", "Concreteness rating (CNC) represents the degree to which the word refers to a tangible entity, based on the norms of Gilhooly and Logie (1980).", "The number of categories ( KFCAT ) and samples ( KFSMP ) are derived from Kucera and Francis (1967).", "Age of acquisition ( AOA ) is based on the norms of Gilhooly and Logie (1980) Figure 1: Complex word identification instruction with examples Figure 2: Complex word identification annotation interface 4.2 Experimental Framework The CAMB system uses the sklearn machine learning framework 3 and achieves best results using an ensemble of algorithms.", "In our experiments, we use the logistic regression classifier as this 
was the best performing classifier for proficiency prediction due to the reduced number of annotations.", "As shown in Table 5, the number of annotations for each subgroup varies and the ratio of non-complex to complex words is highly skewed.", "For the data in our experiments, we firstly convert all proficiency annotations to a binary format, where if at least one annotator has marked the word as complex, the word is given a binary label of 1.", "For our initial experiments, the aim is to see if the needs of a proficiency group are best predicted by that target group.", "In order to make a fair comparison, we control for the number of binary annotations by restricting all groups to the same number of labels as in the beginner class (2,263).", "The annotations are ordered by the highest class agreement and the top 2,263 values are selected.", "Additionally, we remove 20% of the non-complex labels, where no proficiency group had marked the word as complex, to re-balance the class distribution to that of the original binary shared task.", "This resulted in a dataset containing 9,828 non-complex words and 4,423 words marked with at least one proficiency annotation.", "Stratified 5-fold cross-validation was used, resulting in a test size of 2,850 and a total training size of 11,400 per fold.", "In all experiments, 5-fold stratified cross-validation is performed and the average scores across folds are presented.", "Table 1 shows the results of training the system using the annotations of one proficiency subgroup and the subsequent model performance across subgroups.", "Columns represent the training annotations used and the rows represent the results on the respective test sets.", "As a result of the small training size, the overall F1-score achieved across classes is low.", "For instance, when all available labels are used for the intermediate and advanced classes, an F1-score of over 75% is achieved, as shown in Table 4.", "However, the results are still highly informative, as we observe that in all cases the best F1-score is obtained when the original sub-group annotations are used.", "This finding supports the case that the needs of such sub-groups differ and are best predicted using models targeting them specifically.", "The precision, recall and F1-score across all categories are best when the model is trained using the annotations of the target subgroup.", "The only exception is recall for the beginner class, where the intermediate and advanced models perform best (results underlined).", "However, it is worth noting that if an intermediate or advanced learner considers a word to be complex, it is highly likely that a beginner will too.", "This observation is further supported by the finding that, whilst the advanced and intermediate models perform adequately on the beginner test set, the beginner model performs very poorly when predicting the needs of intermediate or advanced users.", "Table 5 (binary label distribution for words per proficiency class, where 1 is complex and 0 is simple): Beginner 2,263 complex and 27,433 non-complex; Intermediate 5,203 and 24,493; Advanced 5,849 and 23,847.", "The advanced and intermediate models achieve higher F1-scores than the beginner model.", "These results support the case that beginner word acquisition is more idiosyncratic than at the intermediate or advanced level, where the concept of word complexity converges.", "Table 2 additionally shows that the complex annotations of a subgroup are the best predictors for that class.",
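The evaluation protocol described above can be sketched as follows (an illustration with scikit-learn rather than the CAMB system's actual pipeline; the feature matrix and annotation counts are placeholders):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import StratifiedKFold

def evaluate_group(features, annotation_counts, n_splits=5, seed=0):
    """Binary CWI evaluation for one proficiency group: a word is complex
    (label 1) if at least one annotator from that group marked it."""
    y = (annotation_counts >= 1).astype(int)
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in folds.split(features, y):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(features[train_idx], y[train_idx])
        pred = clf.predict(features[test_idx])
        p, r, f1, _ = precision_recall_fscore_support(
            y[test_idx], pred, average="binary", zero_division=0)
        scores.append((p, r, f1))
    return np.mean(scores, axis=0)  # average precision, recall, F1 over folds

# Toy usage: 14,251 words with 27 CAMB-style features and random counts.
rng = np.random.default_rng(0)
X = rng.normal(size=(14251, 27))
counts = rng.integers(0, 3, size=14251)
print(evaluate_group(X, counts))
```

Training on one group's labels and testing on another group's labels reproduces the cross-group comparison discussed for Table 1.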
additionally shows that the complex annotations of a subgroup are the best predictors for that class.", "We observe that the best results for the native group occur when trained with native only annotations and the same holds for the non-native class.", "We perform experiments by training with native complexity annotations and observe the performance across non-native proficiency groups.", "The results of these are shown in Table 3, and as there is a larger training set the scores are higher than those in Table 1. We see that the native annotations perform best when predicting the advanced non-native word complexities.", "However, this is not the case for the beginner class.", "We also observe a pattern in native annotations being preferential for higher Figure 3: Graphs showing the top 5 correlated features against the absolute number of annotations for the native and non-native classes all values are significant ( N = 17250; p < . 001 ) Figure 4: The average percentage of complex words as identified by CWI models trained with advanced and beginner annotations on the Newsela dataset proficiency levels.", "(2) His frequent use of prepositions suggests he was rigorously educated in grammar .", "(3) The way he wrote shows he was very educated in grammar .", "We apply our beginner and advanced CWI models on an additional dataset, Newsela.", "4 Newsela contains articles which are rewritten by professional editors at differing levels of simplicity with each grade level as defined by the Common Core Standards (Porter et al., 2011).", "We take the highest, intermediate and lowest level of each article and perform CWI using the models trained with all advanced and beginner annotations.", "Our aim is to see if these models are able to differentiate between levels as CWI has been shown to be an important component in readability assessment systems (Maddela and Xu, 2018).", "In Figure 4, we see that the model trained with the annotations from beginners identifies a higher percentage of words as complex across levels when compared to the advanced model.", "Additionally, both models identify 4 https://newsela.com more complex words in the advanced texts than in the intermediate or beginner.", "These results show that models trained for specific audiences can result in a different concept of complexity.", "For instance, examples 2 and 3 show a sentence from an advanced and simplified article.", "Words in bold are identified as complex by the advanced model and italicised if found complex by the beginner model.", "We see that in the higher level sentence (2), two words are identified as difficult by both models and one word is identified as complex by only the beginner model.", "In the lower level article, the words identified as complex by both models have been simplified.", "This results in only one word being identified as complex by the model tailored for beginners.", "We know that text begins to be accessible for non-native readers if they are familiar with at least 90% of word content (Nation, 2006).", "Therefore, being able to model text understanding across audiences relies on audience specific models of word complexity as demonstrated in our example.", "As the absolute number of native and non-native annotators remained constant across annotations (i.e. 
10), we explore the feature correlations for these subgroups.", "For instance, the word vowed in a given context has been marked as complex by 10 non-native and 1 native annotator.", "This indicates that the word might be more challenging for a non-native audience than for a native one in the given context.", "Figure 3 shows the highest correlated features for the native and non-native groups, all of which are significant (p < .001).", "Overall, the correlations for the native class are higher than for non-native, which is likely due to a more united perspective of complexity.", "This follows, as individuals with a similar first language or educational background are more likely to annotate the same words as complex (Specia et al., 2012).", "For both groups, the highest correlation is that of word length: the positive correlation shows that the longer the word, the more likely it is to belong to the complex class.", "Following this, for the native class we see that the number of syllables is second.", "Whilst the length of a word and the number of syllables are highly correlated (0.64), it is interesting to note that the number of syllables correlates more highly with the native notion of complexity than with the non-native.", "This may be explained by the fact that syllable and phoneme awareness plays an independent role in the processing of text (Engen and Høien, 2002).", "This impact is especially pronounced in lower-skilled readers, where, due to a reduced vocabulary set, precise phonological representations are not yet formed (Elbro, 1996).", "For the non-native class, the second highest correlated feature is KFCAT, which represents the number of categories of text in which the word was present, as given in the norms of Kucera and Francis (1967).", "The negative correlation shows that the more categories of text a word appears in, the less likely it is to be considered complex.", "This measure can also be considered as the specificity of the word.", "For instance, we see that the word grounds is found across a wide range of text categories and is rarely considered complex.", "In contrast, words like altimeter and aneroid, which are highly specific to a particular domain, are considered complex in all contexts by both native and non-native readers.", "The number of categories that a word occurs in is correlated with the word's frequency (0.
35).", "However, when controlling for word frequency, the effect of this correlation is even higher: 0.40 and 0.41 for non-native and native respectively.", "Therefore, the narrower the scope of application for a word, the more likely it is to be considered difficult.", "Finally, we see that psycholinguistic measures such as word familiarity and imagability are highly correlated with both the native and non-native absolute number of annotations.", "When considering imagability, the larger the IMG score, the higher the imagability; for instance, 'dog' has a high IMG factor whereas 'decision' has a low score, as it cannot be easily associated with an image.", "The negative correlation shows that the higher the score, the less likely the word is to be considered complex.", "Intuitively, it makes sense why this feature would be influential in determining word complexity.", "In fact, research on children's reading has shown that words high in imagability are easier to read than words low in imagability (Coltheart et al., 1988).", "It has been suggested that this occurs because low imagability words are acquired later in life than high imagability words.", "Finally, concreteness is one of the top five features correlated with the non-native annotations.", "It has been found that the higher the concreteness of a word, the more likely it is to be comprehensible (Sadoski et al., 2000).", "Word length and frequency have been widely used in CWI systems and are reported to be good cross-linguistic predictors of complexity (Bingel et al., 2018).", "Additionally, psycholinguistic properties are considered important in word complexity estimation (Carroll and White, 1973).", "When investigating the feature importance for our binary models in Section 5, we find that the features with the highest importance across models are word length, frequency and imagability.", "We investigate whether the distribution of the feature values is dependent on the intended audience.", "Figure 5 contains two histograms presenting binned word lengths across proficiency classes.", "Words that have been marked as complex are grouped into 20 bins and the distribution of lengths plotted.", "We observe that beginners mark more short words as complex than either the intermediate or advanced class does.", "Generally, the distribution of lengths shifts to the right as proficiency increases.", "This same pattern is observed for the native and non-native classes, where non-native annotators are more likely to mark shorter words as complex than native annotators.", "Figure 6 contains histograms presenting the binned frequencies for complex words (20 bins).", "For frequencies, we observe a clear difference between the beginner and intermediate/advanced classes.", "The beginner subgroup has marked many more low-frequency words as complex.", "For the advanced class, the range (difference in largest and smallest frequency value) is 259, whereas for beginners the range is 569.", "Furthermore, the mean frequency values show that the advanced and intermediate classes, on average, are more likely to consider words with lower frequencies to be complex (15.09 and 16.22), whereas for beginners the mean is higher (22.
63 ).", "As the advanced and intermediate classes have a narrower spread and lower mean, it is likely frequency based thresholding techniques would work well for these groups.", "When we consider the native and non-native frequency distributions, we notice the same pattern emerging between classes.", "The non-native class has many more low frequency words annotated as complex and the relationship between native and non-native closely resembles the one between advanced and beginner.", "Word frequency provides signal on the likelihood of an individual being exposed to the word.", "However, the actual likelihood of exposure will depend on whether an individual is a native or non-native speaker as well as their experience of the language.", "Finally, in Figure 7 we group imagability ratings into 3 bins representing high, medium and low scores.", "We see that for the advanced and intermediate classes most complex annotations fall in the middle range.", "However, for the beginner class there are still many high imagability words that are deemed as complex.", "It is worth noting, that the coverage of imagability is limited and therefore results should be considered more cautiously.", "Regarding the native and non-native imagability, we again see that the non-native class has slightly more higher imagability words marked as complex.", "To conclude, the relative relationships between beginner and advanced feature distributions very closely mirror the relationship between native and non-native.", "There is a clear trend across features based on the proficiency and experience the reader.", "Furthermore, the feature profiles of advanced nonnative speakers are more similar to that of a native speaker.", "As far as we are aware, this is the first work exploring how the thresholds of features vary across audiences for complexity.", "Investigating this is insightful, as there are numerous threshold based approaches to CWI (Zeng et al., 2005; Elhadad, 2006; Biran et al., 2011), therefore understanding how these thresholds differ for audiences can produce more informed techniques.", "Textual complexity is a subjective phenomenon that is dependent on the intended audience.", "We show that when considering lexical complexity, the best performing CWI models for a target proficiency level are trained with the labels of that sub-group.", "We investigate which features correlate most with the absolute number of native and non-native annotations as well as observe how the distributions of classic complexity features are dependent on the intended audience.", "We find strong similarities between the notion of word complexity for advanced non-native readers and native readers.", "Finally, we release a dataset for CWI with proficiency subgroup annotations.", "In future work we plan to collect additional annotations across classes, especially concentrating on beginners.", "We would also like to investigate how effective informed-thresholding techniques for CWI are compared to high resource systems.", "This work has been done while the second author was a Senior Research Associate at the University of Cambridge.", "We thank Cambridge English for supporting this research via the ALTA Institute.", "We are also grateful to the anonymous reviewers for their valuable feedback." ]
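The experimental framework above (a logistic regression classifier over the hand-crafted CAMB features, evaluated with stratified 5-fold cross-validation and per-fold precision/recall/F1 averaged across folds) can be sketched as follows. This is a minimal illustration only, not the authors' released code; the feature matrix X, the binary labels y, and the sizes used in the example are placeholder assumptions.

```python
# Sketch of the stratified 5-fold CV protocol described above (assumed setup).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import precision_recall_fscore_support

def cross_validate_cwi(X: np.ndarray, y: np.ndarray, n_splits: int = 5, seed: int = 0):
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[train_idx], y[train_idx])
        pred = clf.predict(X[test_idx])
        p, r, f1, _ = precision_recall_fscore_support(
            y[test_idx], pred, average="binary", zero_division=0
        )
        scores.append((p, r, f1))
    # average precision, recall and F1 across the folds
    return np.mean(scores, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(14251, 27))    # 27 features per target word (illustrative)
    y = rng.integers(0, 2, size=14251)  # 1 = complex, 0 = simple (illustrative)
    print(cross_validate_cwi(X, y))
```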
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "result", "result", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "result", "abstain", "method", "objective", "other", "other", "other" ]
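The feature-correlation analysis described above (Pearson correlations of each feature against the absolute number of annotators who marked a word complex, plus a correlation computed while controlling for word frequency, as done for the KFCAT category-count feature) could be computed along these lines. The function names and inputs are illustrative assumptions, not the paper's code.

```python
# Sketch of per-feature correlations and a frequency-controlled partial correlation.
import numpy as np
from scipy import stats

def feature_correlations(features: dict, n_annotations: np.ndarray):
    """Pearson r and p-value of every feature against annotation counts."""
    return {name: stats.pearsonr(values, n_annotations)
            for name, values in features.items()}

def partial_correlation(x, y, control):
    """Correlation of x and y after regressing out a control variable (e.g. frequency)."""
    def residuals(v, c):
        slope, intercept, *_ = stats.linregress(c, v)
        return np.asarray(v) - (slope * np.asarray(c) + intercept)
    return stats.pearsonr(residuals(x, control), residuals(y, control))
```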
[ "Natural language processing (NLP) models trained on people-generated data can be unreliable because, without any constraints, they can learn from spurious correlations that are not relevant to the task.", "We hypothesize that enriching models with speaker information in a controlled, educated way can guide them to pick up on relevant inductive biases.", "For the speaker-driven task of predicting code-switching points in EnglishSpanish bilingual dialogues, we show that adding sociolinguistically-grounded speaker features as prepended prompts significantly improves accuracy.", "We find that by adding influential phrases to the input, speaker-informed models learn useful and explainable linguistic information.", "To our knowledge, we are the first to incorporate speaker characteristics in a neural model for code-switching, and more generally, take a step towards developing transparent, personalized models that use speaker information in a controlled way.", "Imbalanced datasets, flawed annotation schemes, and even model architectures themselves can all cause neural models to encode and propagate biases by incorrectly correlating social information with labels for a task (Sun et al., 2019; Field et al., 2021).", "As a result, models may be brittle and offensive in the presence of racial or gender attributes (Kiritchenko and Mohammad, 2018; Nozza et al., 2021), unsuitable for processing mixed-language text or dialect variations (Sap et al., 2019; Kumar et al., 2021; Winata et al., 2021), or ones that can miscommunicate intents in translation setups.", "Contextualizing models in social factors is important for preventing these issues and building more socially intelligent and culturally sensitive NLP technologies (Hovy and Yang, 2021).", "useful inductive biases, thereby improving performance on person-oriented classification tasks.", "We test this hypothesis on the task of predicting code-switching (language mixing) in a multilingual dialogue, which is inherently linguistically and socially driven (Li, 2013).", "Prior approaches for predicting code-switching consider only shallow linguistic context (Dogruz et al., 2021).", "As we show in our experiments 1 , using a standard Transformer-based classifier (Conneau et al., 2020) trained with only linguistic context results in suboptimal and unstable models.", "Moreover, we believe code-switch prediction is a suitable first task for learning speaker-driven inductive biases; we can test whether models learn useful relationships between social attributes while minimizing the risk of building a model that perpetuates social prejudices.", "We ground the models in relevant social factors, such as age, native language, and language-mixing preference of the interlocutors, via text-based speaker descriptions or prompts (cf. Zhong et al., 2021; Wei et al., 2021).", "We find that prepending speaker prompts to dialogue contexts improves performance significantly, and leads to more stable generalizations.", "Our prompts are different from the embedding-based personas of Li et al. 
(2016) and the synthesized descriptions from Persona-Chat (Zhang et al., 2018), capturing theoretically grounded social and linguistic properties of speakers, as opposed to hobbies or occupations.", "To analyze the inductive biases that the models learn, we use SelfExplain (Rajagopal et al., 2021) an interpretable text classification model highlighting key phrases in the input text.", "We propose a new method for aggregating the interpretations produced by SelfExplain to explain model predictions and align them with sociolinguistic literature.", "We motivate our study of predicting codeswitching in 2, and describe the task and inter-1 All data and code will be available at https:// github.com/ostapen/Switch-and-Explain .", "pretable neural text classification models in 3.", "After outlining important ethical considerations in 4, we detail our experiments (5) and results (6), and provide an analysis of speaker-aware model generalizations that are grounded in prior psycholinguistic research on code-switching (7).", "Our overarching goal is to develop a general and theoretically-informed methodology to ground neural models in a social context, because a wide array of person-centric classification tasks, such as sentiment prediction or hate speech detection, can fail without proper social contextualization (Sap et al., 2019; Kiritchenko and Mohammad, 2018; Hovy and Yang, 2021).", "We choose a speaker-driven task that is ethically safer to experiment with (see a detailed discussion in 4): predicting code-switching in humanhuman dialogues.", "Code-switching is the alternation between languages within and between utterances (See Appendix A.1 for a detailed example of code-switched dialogue.) It is a languageand speaker-driven phenomenon, reflecting speaker identities and relationships between them, in addition to their linguistic backgrounds, preferences and topical constraints (Beatty-Martnez et al., 2020).", "Prior sociolinguistic work established the importance of speaker context for code-switching, and existing multilingual modelstrained with only monolingual linguistic contextare not speaker-grounded nor well-suited for dealing with mixed-language data, leaving gaps which we begin to address.", "Figure 1 provides a key motivating example of how global speaker features of two bilingual conversational participants influence their local speech production.", "Blue , whose native language is Spanish, begins speaking in Spanish, while Green responds in English.", "Following Green 's clarification question about the actor The Rock , Green begins in English, but will accommodate Blue (Ahn et al., 2020; Beatty-Martnez et al., 2020) to reply with el actor (Spanish), motivating the need for social context when processing mixed-language data.", "In this section we introduce the task of predicting code-switching points and describe the base model for it, with a self-explainable architecture as its backbone.", "We then describe how we incorporate speaker-grounding prompts into the model.", "Let d i = [ w 1 , w 2 , . . . , w u ] be an utterance (string of tokens) in the full dialogue D .", "Given a context window of size h , a model processes a local dialogue context: [ d i h , . . . , d i 1 , d i ] , where d i := [ w 1 , w 2 , . . . , w b ] , b { 1 , 2 , . . . 
, u } .", "In other words, we take the prefix of the current utterance d i up to an index b .", "Each word w j in the dialogue has a language tag l j associated with it.", "For the given dialogue context D up to boundary-word w b , a model must predict whether the language of the next word after w b will be code-switched (1) or the same (0).", "In our setup, a code-switch occurs between two consecutive words w b , w b +1 if the language of w b is English and the language of w b +1 is Spanish (or vice versa).", "In particular, a word with an ambiguous language, such as the proper noun Maria, cannot be a switch point; only words with unambiguous language tags are switched.", "This prevents us from labeling monolingual utterances as code-switched only because they have an ambiguous term such as a proper noun.", "Table 1 (example List prompt): ASH is first speaker, older, female, from Spanish speaking country, between English and Spanish prefers both, rarely switches languages.", "Speaker-Aware Grounding: Each utterance in the dialogue context has a speaker associated with it.", "Let the set of all speakers in the dialogue context be S = { s 1 , s 2 , s 3 , . . . , s M } .", "We define a speaker-aware prompt P = { p 1 , p 2 , p 3 , . . . , p K } as a concatenation of K strings p i , each describing an attribute of a speaker in the dialogue.", "Together, P describes the unique attributes of all M speakers in the dialogue context.", "Our proposed speaker-guided models take as input P D = [ p 1 , . . . , p K , d i w , . . . , d i ] , the concatenation of prompts and dialogue context.", "We encode the inputs with a multilingual Transformer-based architecture (Devlin et al., 2019; Conneau et al., 2020) before using a linear layer to predict the presence or absence of a code-switch.", "We incorporate global information about each speaker in a dialogue using different prompt styles, generating a prompt P for a given dialogue context D .", "In theory, these prompts have the potential to change the model's priors by contextualizing dialogue with speaker information and should be more useful for predicting upcoming language switches.", "We consider two aspects when designing prompts.", "Content: The prompt describes all speakers S in the dialogue using a set of speaker attributes A = { a 1 , a 2 , . . . , a T } .", "To create a description P m for speaker s m S , we combine phrases p s m 1 , p s m 2 , . . . , p s mT , such that each phrase corresponds to exactly one attribute.", "As Table 1 indicates, we use speaker IDs to tie a speaker to her description, and all prompts cover the full set of attributes, A , for all speakers in D .", "Form: We consider three prompt forms: List , Sentence , and Partner .", "The prompt form determines both the resulting structure of prompt string P and the way we combine local attribute phrases p j to generate a speaker description P i .", "Table 1 provides concrete examples of List, Sentence, and Partner prompts for a pair of speakers.", "List and Sentence prompts do not explicitly relate speakers to each other: the final prompt P = { P 1 , . . . , P m , . . . 
, PM } concatenates individual speaker prompts P i .", "List forms combine all attributes in a speaker description P m with commas, while Sentence forms are more prose-like.", "These prompt forms are most straightforward to implement and simply concatenate each speaker profile without considering interactions of features.", "The model must implicitly learn how attributes between different speakers relate to one another in a way that influences code-switching behavior.", "Speaker entrainment or accommodation influences code-switching behavior (Bawa et al., 2020; Ahn et al., 2020; Mysln and Levy, 2015; Parekh 3855 et al., 2020).", "Thus, we also created Partner prompts to explicitly highlight relationships between speakers.", "We hypothesize that these are more useful than the List and Sentence forms, from which the model must implicitly learn speaker relationships.", "Partner prompts include an initial P i containing attribute qualities that all speakers share: P i := (cid:8) p a j | a j = v k , s S (cid:9) , where a j A and v k is a value taken on by attribute a j .", "As an example, all speakers may prefer Spanish, so P i will contain an attribute string p i capturing this.", "The final partner prompt is P partner = { P i , P 1 , P 2 , . . . , PM } , where speaker-specific descriptions P 1 , P 2 , . . . , PM highlight unique values of each speaker.", "We prepend prompts P to dialogue context D using [ EOS ] tokens for separation.", "We do not vary the feature order in a given prompt, but additional prompt tuning may reveal an optimal presentation of features in these prompts.", "Our proposed setup takes as input the dialogue context and a prepended speaker prompt.", "To explain predictions of the baseline and our speaker-aware setups, we use SelfExplain (Rajagopal et al., 2021), a framework for interpreting text-based deep learning classifiers using phrases from the input.", "SelfExplain incorporates a Locally Interpretable Layer (LIL) and a Globally Interpretable Layer (GIL).", "GIL retrieves the top-k relevant phrases in the training set for the given instance, while LIL ranks local phrases within the input according to their influence on the final prediction.", "LIL quantifies the effects that subtracting a local phrase representation from the full sentence have on the resulting prediction.", "We exclusively use LIL to highlight phrases in the speaker prompts and dialogues to identify both social factors and linguistic context influential to models; through post-hoc analysis, we can reveal whether these features can be corroborated with prior literature or indicate a model's reliance on spurious confounds.", "We do not use the GIL layer because we do not have instance-level speaker metadata; instead, speaker features are on the dialogue-level and will not yield useful top-k results.", "Figure 4 illustrates our full proposed model with two classification heads: one for prediction and one for interpretation.", "7.1 describes how we score phrases according to their influence on the final prediction.", "Data Privacy In line with prior behavioral studies, our work illustrates that sociolinguistic cues are essential for predicting code-switching points.", "To deploy our speaker-informed model, we must protect the identity and privacy of users through techniques such as federated machine learning: deploying local models to end-users without sending any user information back to the cloud (Konecn et al., 2016).", "Local models and data should be encrypted to prevent breaches and tampering with 
algorithms, as well as possible reconstruction of training data (Hitaj et al., 2017; Carlini et al., 2019; Zhang et al., 2020), minimizing the risk of leaking speaker information.", "Additionally, deployed systems should only collect and access information if the user agrees to it.", "All conversational participants voluntarily shared the data we use.", "Moreover, this research is important to conduct because there is evidence that human users react positively to appropriately adaptive technologies (Branigan et al., 2010).", "Specifically, initial experiments indicate that users rate dialogue systems that incorporate code-switching higher than ones that do not (or that do it less naturally) (Ahn et al., 2020; Bawa et al., 2020).", "A classifier, such as the one we explore in this work, can be very useful for developing a naturalistic dialogue system that is useful and enjoyable to use by people of diverse linguistic backgrounds.", "Our work focuses on English-Spanish code-switching which is widespread and accepted, but different regions and cultures have varying opinions of code-switching.", "It is important to understand these before building an application for a new language pair (Dogruz et al., 2021).", "Our task requires a dataset which not only has natural, mixed-language dialogue, but includes also information about its speakers.", "We use the Bangor Miami (Deuchar et al., 2014) dataset (BM) containing 56 transcribed dialogues in mixed English and Spanish.", "Most dialogues are between two speakers, but may contain three or four; another set of dialogues records only one speaker's side of the conversation.", "These monologues are still useful to study how linguistic cues influence code-switching.", "Moreover, language IDs are provided for every token.", "The dataset includes a ques-3856 tionnaire of self-reported information about each conversational participant; this includes dialogue-independent, macro-social features such as age, gender, and country of origin, as well as language preferences and speaker-provided linguistic ability.", "We identify each country according to the primary language (English, Spanish, or neither) spoken in the country and bin age features into four comparative groups ranging from youngest to oldest.", "An order feature indicates which speaker spoke first, second, etc. 
in the global dialogue context; we hypothesize that speakers may entrain, or change their speech to match, those who start a conversation (Ahn et al., 2020).", "Altogether, six features define our attribute set A .", "For each dialogue in BM, we extract all existing code-switch points; for a given switched word, we retain all left-most context in its containing utterance and vary the number of prior utterances that are included as context between 1, 2, 3, and 5.", "To generate negative examples, we select monolingual utterances by sampling from a binomial distribution with p = 0 .", "75 .", "For each retained utterance, we randomly choose three potential switch points (ex-tracting leftmost context in the same way), resulting in a dataset that is approximately 25% switched.", "Creating Splits Most speakers participate in only one of the 56 dialogues in the corpus.", "To help ensure the model sees new dialogue context in training and testing time, we split the train, validation, and test splits by conversation in a 60:20:20 ratio.", "For each dialogue, we compute the multilinguality index (M-Index) (Barnett et al., 2000), a measure between 0 and 1 indicating the mixedness in the text: 0 is monolingual text, while 1 is a code-switch at every word.", "We stratify the conversations by the M-Index and code-switching labels to enforce a more balanced distribution of monolingual and mixed-language conversations.", "We down-sample monolingual examples to bal-ance training and validation splits and report results on unbalanced validation and test sets.", "Table 2 shows the proportions of code-switched examples.", "Our final balanced training and validation splits have about 14,000 and 3,000 examples, while the unbalanced validation and test sets have approximately 7,000 and 9,000 examples, respectively.", "Marking Dialogue Turns The baseline setup does not incorporate speaker cues.", "Instead we use [ EOT ] and [ EOU ] tokens at the end of each utterance to signify end-of-turn and end-of-utterance, respectively.", "Given two consecutive utterances, an [ EOT ] signifies a change in speakers, while [ EOU ] indicates no change.", "In the speaker-informed setup, unique speaker IDs distinguish utterances from each speaker, and we prepend informative prompts characterizing the conversational participant(s).", "Prompts include user-reported metadata of personal preferences and characteristics.", "We use three prompt templates, as detailed in Section 3.", "We use XLM-RoBERTa (XLMR) (Conneau et al., 2020) to encode the text and jointly fine-tune XLMR on the code-switch prediction task.", "As a baseline, we use an XLMR model without prompt inputs P .", "Our speaker-prompted models, SP-XLMR, are trained by prepending speaker prompts to the dialogue context.", "The small size of our dataset results in higher variability in performance.", "To mitigate this, we select 10 random seeds, and train a given model setup (i.e., list prompt, no prompt, etc.) 
on each seed.", "The number of seeds is arbitrary; however, we choose a generous number of seeds to yield a tighter confidence interval for our results.", "We use 3 prompt types, resulting in 30 speaker-prompted models and 10 baseline models.", "We refer to speaker-prompted models as SP-XLMR and to the non-speaker baseline as simply XLMR .", "All models are trained using AdamW optimizers with a weight decay of 1 e 3 for a maximum of 10 epochs.", "SP-XLMR models are trained with a learning rate of 5 e 5 and XLMR models use a learning rate of 1 e 5 .", "To refer to a particular speaker-prompted model, we use a combination of prompt form and context size, for example, LIST -5.", "We report accuracy, F1, precision, and recall on the unbalanced validation and test sets.", "We use the Mann Whitney U significance test because it does not assume normally-distributed population means.", "curacy and F1 of XLMR , LIST , SENTENCE , and PARTNER models, across all context windows and seeds on the unbalanced validation and test sets.", "Each value is an average of 40 models.", "Adding prompt features boosts accuracy upwards of 5-8 percent points and F1 by .04-.05 compared to XLMR ; XLMR does not even surpass the majority baseline in accuracy.", "Based on validation set results, partner features are most helpful, confirming our sociolinguistically-driven hypothesis (see Section 3.2) Moreover, the standard deviation of XLMR accuracy is more than twice as large (3.66 on validation and 2.95 on test) as that of any speaker-prompted model.", "The improvements in accuracy and decrease in variation between models suggest that explicit speaker information guides models to learn relevant inductive biases for the codeswitching task.", "However, we cannot guarantee that the trained models will not reveal harmful social biases in other tasks.", "We see similar trends, regarding accuracy, F1, and standard deviation, in Table 4, which includes results for SP-XLMR and XLMR across the different context windows; each SP-XLMR and XLMR value is an average of 30 and 10 models, respectively.", "Larger context windows are helpful for both model types.", "Tables 8 and 9 include precision and recall scores for each prompt type and context window; in general, speaker-prompted models have upwards of .10 points higher precision than baseline XLMR , indicating that speaker information helps to identify valid switch points.", "As the context window increases, all speaker prompt types yield fairly similar performance.", "However, when context sizes are small (1 or 2 previous utterances only), Partner and Sentence prompts yield higher accuracy and precision than List models, perhaps because these prose-like formats are more useful for the model than a simple concatenate list of features.", "model performance.", "As a control, we generated synthetic descriptions for each speaker, including features such as favorite foods and weather, owned pets, and height.", "None of these attributes are discussed in the conversations and would not explicitly influence code-switch production.", "After generating descriptions in the Sentence and Partner format, we prepend them to dialogues using a context window of 5.", "According to the results in Table 5, these pseudo-descriptions significantly decrease performance, even relative to the baseline XLMR model Validation Test Model Acc.", "trained with a context window of 5.", "The results indicate that domain knowledge is useful to understand which speaker features to add to a model to improve 
performance, and they give more support to the claim that relevant speaker information helps guide models to useful inductive biases.", "Compared to baseline models, speaker models not only attain higher accuracy and F1 scores, but they also have a much smaller standard deviation in scores.", "For these experiments, we seek to explain our findings using the important phrases identified by LIL.", "Within a speaker prompt P , each speaker characteristic maps to its own phrase (i.e., from an English-speaking country ); in the dialogue, we extract 5-gram phrases using a sliding window.", "We detail our approach to scoring phrase influence and analyze key dialogue and speaker features.", "Our goals are to", "(a) identify phrases in the input whose removal will change the resulting model prediction and", "(b) identify phrases which contribute high confidence to the resulting model prediction.", "Let F be the full textual input consisting of sole dialogue context or dialogue context prepended with prompts, while ZF is the softmax output from our classifier.", "Let j be the index of the class predicted from the full input.", "LIL inputs ZF along with a series of masks, each corresponding to a local phrase in either the dialogue or the speaker prompt.", "Let nt be a local phrase, such that nt is either a speaker phrase p i or an n-gram in an utterance d i D .", "Using LIL, we quantify the effect of removing the representation of phrase nt from the representation of F by comparing the activation differences of 3858 Validation Test SP-XLMR XLM-R SP-XLMR XLM-R Ctx Acc.", "Z nt and Z f at index j , and we analyze the resulting sign and magnitude to address goals", "(a) and", "(b), respectively: C := (cid:26) 1 argmax Z nt = j 1 argmax Z nt = j (1) r ( nt ) = C | z nt j z F j | (2) where z nt j and z F j are the softmax scores of the phrase-ablated sentence and the full sentence, respectively, at index j , and r ( nt ) is the relevance score of nt .", "As Equations 1 and 2 indicate, we analyze a local phrase's score as follows: Sign A positive sign ( C = 1 ) indicates that the representation without nt does not change the resulting prediction.", "A negative score ( C = 1 ) indicates a more influential phrase because its ablation results in a different prediction.", "Magnitude corresponds to the weight of the contribution of a particular phrase.", "If the activation difference is high in magnitude, then nt strongly influences the resulting prediction.", "Magnitudes near 0 indicate a non-influential phrase.", "Our scoring approach differs slightly from the original implementation (see Appendix A.2).", "Given a context size, the dialogue phrase masks are identical for SP-XLMR and XLMR; thus, we directly compare which phrases are most informative in the presence and absence of speaker features.", "We consider only phrases which are influential enough to change a given model's prediction after their representations are subtracted from the full-sentence representations (phrases with a negative score).", "Setting context size to 5, we identify examples from the validation set for which the majority of SP-XLMR models (out of 30) predicted correctly and the majority of XLMR models (out of 10) predicted incorrectly.", "Nearly 95% of such examples are not switched, indicating that added speaker information helps improve model precision.", "We sample a portion of these instances for our analysis.", "For a given validation set example and model setup, we track all influential phrases and count the number of models 
for which each phrase is influential.", "To account for phrase interactions, we track the agreement on co-occurring pairs and trios of important phrases.", "We compare only top-10 influential phrases.", "We use 10 phrases because all models rank at least 10 phrases as influential (but not 15 or 20).", "Phrase scores in the topk , where k < 10 , tend to all be very similar.", "We are not interested in small-scale score differences, and thus, equally consider all phrases ranked in the top-10.", "We hypothesize that speaker models (1) exhibit more phrase agreements compared to baseline models and (2) use more helpful and relevant linguistic features for code-switch prediction.", "Most speaker models agree on which phrases are important.", "In addition to tracking which individual phrases are in the top-10, we analyze how many pairs and trios of phrases are in the top-10 list.", "Figure 2 indicates that the majority of speaker-prompted models (out of 30) tend to agree on the top-10 important phrase groupings, especially across single and pairwise groupings.", "The speaker models likely pick up on similar inductive biases, as revealed through the higher feature agreement among these models.", "Only around 38-40% of baseline models tend to agree on which phrases are most important, potentially explaining the higher standard deviation in results among the baseline models compared to the speaker models.", "Speaker models make better use of language information.", "On monolingual (negative) examples, both speaker-prompted and baseline models tend to look at a majority of monolingual phrases in the same languages (English or Spanish), and these 3859 0 10 20 30 40 50 60 Single Pair Trio % of Models That Agree on Top-10 Phrases P h r a s e G r o uu p i n g SP-XLMR-5 XLMR-5 **** **** **** Figure 2: Each bar indicates the average percent and standard deviation of XLMR (dashed green, N =10 ) and SP-XLMR (green, N =30 ) models that agree on the top-10 phrases.", "phrases are mainly located in the first quarter of tokens preceding the potential switch point.", "However, speaker models successfully predict many of these negative examples correctly, unlike baselines.", "In many cases, the speaker models have additional access to global speaker properties of the current speaker for example, never switches languages and this may also influence them to make the correct prediction given prior linguistic context.", "Even when baseline models have strong evidence for predicting no code-switch (i.e., ranking only monolingual phrases as important), they tend to misuse this history and randomly predict code-switches.", "On code-switched examples, speaker models continue to favor phrases that are nearest to the switchpoint, while baseline models are sensitive to phrases in early and late dialogue context.", "Using phrases closer to switch points may give better structural context from which to predict a switch.", "In several cases, speaker models correctly predict an English-to-Spanish switch and rank prior Spanish phrases as influential, while baseline models highly rank English phrases and predict no switch.", "We see a similar pattern in Spanish-to-English switches.", "Speaker information may help models learn, linguistically, what it means to code-switch.", "Linguistic preference features are most influential across model setups.", "For all speaker-prompted models, speakers' language and codeswitching preferences are the most influential on the resulting predictions.", "Country of origin information is helpful, 
too, but may be misleading: speakers may immigrate from a Spanish country but grow up speaking English; in such cases, the language information likely helps disambiguate any confusions.", "Following these linguistic features are relational features (speaker order) in the dialogue, and less often, age features, especially in partner models.", "Gender is almost never influential.", "Age may be correlated with linguistic preference features and is thus not influential on its own.", "Gender and age can interplay to define larger, dynamic social roles, which may influence language production.", "On their own, these static markers of identity do not significantly characterize one's speech patterns (Eckert, 2012; Ochs, 1992).", "However, shared macro-social attributes (age and gender) may be influential in partner models because the partici-pant constellation influences how speakers express themselves and modulate social distance (Giles and Baker, 2008; Mysln and Levy, 2015).", "influential in true and predicted code-switches.", "To study how linguistic preferences interact with code-switch behavior, we analyzed all true and predicted code-switch points according to the preferences of speakers in the conversations.", "Specifically, we tracked whether speakers preferred to codeswitch, speak English, speak Spanish, or preferred both English and Spanish.", "Looking at each feature in isolation, we counted the number of switch points that occur when at least one speaker prefers the feature, when the current speaker prefers the feature, and when other conversational partners (aside from the current speaker) prefer the feature.", "Table 6 in the Appendix indicates that preference to code-switch, to speak Spanish, or to speak both English and Spanish are most dominant in influencing code-switching behavior.", "Speaker-prompted models are able to learn this relationship.", "Moreover, we found that preferences of speakers other than the current speaker tend to be more influential in driving code-switching behavior, relating to the idea of speaker entrainment or accommodation.", "For more details, please refer to Appendix A.4.", "Ablating Features Using the best-performing setups on the validation set, namely Partner and Sentence models with 5 prior utterances for context, we identify influential speaker attributes using a leave-one-out-approach to mask out each attribute a i A .", "For each attribute, we train 10 ablated 3860 models and evaluate on the validation set.", "Note that this is different from the phrase ablations using LIL because we finetune the XLM-R encoder during the training process; in this setup, the ablated feature information is never backpropagated to update the encoder weights.", "The results of these experiments (see Appendix A.5) give some evidence that language preference, mixing, and age information have statistically significant effects on the performance of Partner-5 models, but this does not hold for the Sentence-5 models.", "We have strong evidence to believe that these speaker attributes have more complex underlying relationships and leave the exploration of these multi-feature interactions for future work.", "Our use of prompts 2 is similar to Zhong et al. (2021) and Wei et al. 
(2021), who rely on prompts to put models in different states for different tasks.", "Speaker Personas Open-domain dialogue agents which act according to a persona are more natural and engaging than the non-personalized baselines (Li et al., 2016); these personas can be short, superficial descriptions generated through crowd-sourcing (Zhang et al., 2018), gathered from Red-dit (Mazar et al., 2018), or self-learned (inferred) from dialogue context (Madotto et al., 2019; Cheng et al., 2019).", "These works, however, primarily evaluate dialogue content and only in one language (English) instead of analyzing how speaker properties influence the downstream dialogue structure.", "Addressing Model Bias Prior works for mitigating social biases feature adversarial learning (Pryzant et al., 2018; Elazar and Goldberg, 2018), counterfactual data augmentation (Zmigrod et al., 2019; Kaushik et al., 2020) or dataset balancing (Zhao et al., 2017), and more recently, using an interpretability-driven approach to uncover and controllably demote hidden biases (Han and Tsvetkov, 2021).", "Techniques for adapting to linguistic variants and mixed-language data include adversarial learning to pick up on key linguistic cues (Kumar et al., 2021), augmenting datasets with synthetic text (Winata et al., 2019) or examples of variants that models underperform on (Chopra et al., 2021), discriminative learning (Go-nen and Goldberg, 2019), and transfer learning with 2 Our prompts are data-dependent and fixed, and thus rather unrelated to the prompt tuning literature (Liu et al., 2021).", "Codeswitch Prediction The first work in codeswitch prediction (Solorio and Liu, 2008) uses Naive Bayes (NB) on lexical and syntactic features of shallow word context before switch boundaries from a small, self-collected dataset of English-Spanish conversations.", "Another NB approach predicts switch points on Turkish-Dutch social media data (Papalexakis et al., 2014), additionally using multi-word expressions and emoticons in their experiments.", "Piergallini et al. 
(2016) extend the techniques of the prior two works to Swahili-English codeswitched data.", "Two fine-grained logistic regression analyses (Fricke and Kootstra, 2016; Mysln and Levy, 2015) go beyond lexical information, incorporating psycholinguistic properties such as word accessibility and priming effects, and include binary features to code for properties such as speaker age and preceding utterance language.", "To the best of our knowledge, this is the first work incorporating sociolinguistically-grounded social factors in an interpretable neural model for codeswitch point prediction.", "Our speaker-aware models can better leverage mixed-language linguistic cues, compared to a text-only baseline: specifically, we showed performance gains of up to 7% in accuracy and .05 points in F1 scores on an imbalanced code-switching dataset.", "Our work is limited to one language pair and uses a small dataset.", "Thus, additional studies are necessary to assess the generalizability of our findings to other languages.", "Moreover, speaker identities can change dynamically in different settings.", "Linguistic preferences may also change over time.", "We could move beyond static personas, refining them using local dialogue context.", "In addition, speaker-grounded models must be carefully engineered to protect user privacy, using proxies for personal information and keeping private information away from shared resources.", "In the future, we would like to explore whether such speaker prompting can improve models in other person-centered tasks, e.g., coreference resolution (especially for datasets explicitly testing gender biases) or sentiment analysis.", "Using techniques such as data augmentation, we can explicitly guide models away from biases learned during training.", "With ethical considerations in mind, our work advances the state-of-the-art in building more adaptable and person-aware NLP technologies.", "We thank Vidhisha Balachandran, Dheeraj Rajo-gopal, Xiaochuang Han, Artidoro Pagnoni, and the anonymous reviewers for providing valuable feedback on our work.", "This work was supported in part by grant No. 2019785 from the United States-Israel Binational Science Foundation (BSF), National Science Foundation (NSF) grants No. 2007960, 2007656, 2125201 and 2040926, and by grant No.", "LU 856/13-1 from the Deutsche Forschungsge-meinschaft (DFG)." ]
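The List, Sentence and Partner prompt forms described above can be sketched roughly as follows; the attribute phrasing, speaker dictionaries and IDs here are illustrative assumptions rather than the paper's exact templates (those are given in its Table 1).

```python
# Rough sketch of assembling speaker prompts from per-speaker attribute phrases.
def list_prompt(speakers):
    # one comma-joined description per speaker, concatenated
    return " ".join(f"{s['id']} is " + ", ".join(s["attrs"]) + "." for s in speakers)

def sentence_prompt(speakers):
    # more prose-like: one sentence per speaker
    return " ".join(f"{s['id']} is {' and is '.join(s['attrs'])}." for s in speakers)

def partner_prompt(speakers):
    # attributes shared by all speakers first, then what is unique to each speaker
    shared = set(speakers[0]["attrs"]).intersection(*(set(s["attrs"]) for s in speakers[1:]))
    parts = ["Both speakers are " + ", ".join(sorted(shared)) + "."] if shared else []
    for s in speakers:
        unique = [a for a in s["attrs"] if a not in shared]
        if unique:
            parts.append(f"{s['id']} is " + ", ".join(unique) + ".")
    return " ".join(parts)

speakers = [
    {"id": "ASH", "attrs": ["the first speaker", "older", "female", "from a Spanish-speaking country"]},
    {"id": "JAC", "attrs": ["the second speaker", "younger", "female", "from an English-speaking country"]},
]
prompt = partner_prompt(speakers)           # prepended to the dialogue context
model_input = prompt + " [EOS] " + "..."    # dialogue context follows the prompt
```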
[ "abstain", "objective", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "objective", "abstain", "result", "method", "result", "objective", "other", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "method", "other", "other", "other" ]
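The phrase-relevance scoring from Equations 1 and 2 above (a sign C that records whether ablating a phrase flips the predicted class, scaled by the softmax difference at the originally predicted class) reduces to a few lines. This is a hedged sketch over assumed softmax vectors, not the SelfExplain implementation.

```python
# Sketch of r(nt) = C * |z^{nt}_j - z^F_j| with C = +1 / -1 as described above.
import numpy as np

def phrase_relevance(z_full: np.ndarray, z_ablated: np.ndarray) -> float:
    j = int(np.argmax(z_full))                     # class predicted from the full input
    c = 1.0 if int(np.argmax(z_ablated)) == j else -1.0
    return c * abs(float(z_ablated[j]) - float(z_full[j]))

def rank_phrases(z_full, phrase_outputs):
    # most influential phrases: negative score (removal flips the prediction)
    # with large magnitude; sorting ascending puts them first
    scores = {ph: phrase_relevance(z_full, z_nt) for ph, z_nt in phrase_outputs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])
```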
[ "In online debates, users express different levels of agreement/disagreement with one another's arguments and ideas.", "Often levels of agree-ment/disagreement are implicit in the text and must be predicted to analyze collective opinions.", "Existing stance detection methods predict the polarity of a post's stance toward a topic or post, but don't consider the stance's degree of intensity.", "We introduce a new research problem, stance polarity and intensity prediction in response relationships between posts.", "This problem is challenging because differences in stance intensity are often subtle and require nuanced language understanding.", "Cyber argumentation research has shown that incorporating both stance polarity and intensity data in online debates leads to better discussion analysis.", "We explore five different learning models: Ridge-M regression, RidgeS regression, SVR-RF-R, pkudblab-PIP, and T-PAN-PIP for predicting stance polarity and intensity in argumentation.", "These models are evaluated using a new dataset for stance polarity and intensity prediction collected using a cyber argumentation platform.", "The SVR-RF-R model performs best for prediction of stance polarity with an accuracy of 70.43% and intensity with RMSE of 0.596.", "This work is the first to train models for predicting a post's stance polarity and intensity in one combined value in cyber argumentation with reasonably good accuracy.", "Many major online and social media and networking sites, such as Facebook, Twitter, and Wikipedia, have taken over as the new public forum for people to discuss and debate issues of national and international importance.", "With more participants in these debates than ever before, the volume of unstructured discourse data continues to increase, and the need for automatic processing of this data is prevalent.", "A critical task in processing online debates is to automatically determine the different argumentative relationships between online posts in a discussion.", "These relationships typically consist of a stance polarity (i.e., whether a post is supporting, opposing, or is neutral toward another post) and the degree of intensity of the stance.", "Automatically determining these types of relationships from a given text is a goal in both stance detection and argumentation mining research.", "Stance detection models seek to automatically determine a text's stance polarity (Favoring, Opposing, or Neutral) toward another text or topic based on its textual information (Mohammad et al., 2016).", "Likewise, argumentation mining seeks to determine the stance relationship (Supporting, Attacking, or Neutral) between argumentation components in a text (Stede and Schneider, 2018).", "However, in both cases, attention is only paid to the stance's polarity, while the intensity of the relationship is often ignored.", "Some studies have tried to incorporate intensity into their predictions by expanding the number of classes to predict (Strongly For, For, Other, Against, and Strongly Against); however, this expansion lowered their classification performance considerably compared classification without intensity (Sobhani et al., 2015).", "Thus, effective incorporation of stance intensity into stance classification remains an issue.", "Research in Cyber Argumentation has shown that incorporating both stance polarity and intensity information into online discussions improves the analysis of discussions and the various phenomena that arise during a debate, including opinion polarization 
(Sirrianni et al., 2018), and identifying outlier opinions (Arvapally et al., 2017), compared to using stance polarity alone.", "Thus, automatically identifying both the post's stance polarity and intensity, allows these powerful analytical models to be applied to unstructured debate data from platforms such as Twitter, Facebook, Wikipedia, comment threads, and online forums.", "To that end, in this paper, we introduce a new research problem, stance polarity and intensity prediction in a responsive relationship between posts, which aims to predict a text's stance polarity and intensity which we combine into a single continuous agreement value.", "Given an online post A, which is replying to another online post B, we predict the stance polarity and intensity value of A towards B using A's (and sometimes B's) textual information.", "The stance polarity and intensity value is a continuous value, bounded from -1.0 to +1.0, where the value's sign (positive, negative, or zero) corresponds to the text's stance polarity (favoring, opposing, or neutral) and the value's magnitude (0 to 1.0) corresponds to the text's stance intensity.", "Stance polarity and intensity prediction encapsulates stance detection within its problem definition and is thus a more difficult problem to address.", "While stance polarity can be identified through specific keywords (e.g., agree, disagree), the intensity is a much more fuzzy concept.", "The difference between strong opposition and weak opposition is often expressed through subtle word choices and conversational behaviors.", "Thus, to accurately predict agreement intensity, a learned model must understand the nuances between word choices in the context of the discussion.", "We explore five machine learning models for agreement prediction, adapted from the top-performing models for stance detection: Ridge-M regression, Ridge-S regression, SVR-RF-R, pkudblab-PIP, and T-PAN-PIP.", "These models were adapted from Mohammad et al. (2016), Sobhani et al. (2016), Mourad et al. (2018), Wei et al. (2016), and Dey et al. 
(2018) respectively.", "We evaluated these models on a new dataset for stance polarity and intensity prediction, collected over three empirical studies using our cyber argumentation platform, the Intelligent Cyber Argumentation System (ICAS) .", "This dataset contains over 22,000 online arguments from over 900 users discussing four important issues.", "In the dataset, each argument is manually annotated by their authoring user with an agreement value.", "Results from our empirical analysis show that the SVR-RF-R ensemble model performed the best for agreement prediction, achieving an RMSE score of 0.596 for stance polarity and intensity prediction, and an accuracy of 70% for stance detection.", "Further analysis revealed that the models trained for stance polarity and intensity prediction often had better accuracy for stance classification (po-larity only) compared to their counterpart stance detection models.", "This result demonstrates that the added difficulty of detecting stance intensity does not come at the expense of detecting stance polarity.", "To our knowledge, this is the first time that learning models can be trained to predict an online post's stance polarity and intensity simultaneously.", "The contributions of our work are the following: We introduce a new research problem called stance polarity and intensity prediction, which seeks to predict a post's agreement value that contains both the stance polarity (value sign) and intensity (value magnitude), toward its parent post.", "We apply five machine learning models on our dataset for agreement prediction.", "Our empirical results reveal that an ensemble model with many hand-crafted features performed the best, with an RMSE of 0.595, and that models trained for stance polarity and intensity prediction do not lose significant performance for stance detection.", "2.1 Stance Detection Stance detection research has a wide interest in a variety of different application areas including opinion mining (Hasan and Ng, 2013), sentiment analysis (Mohammad, 2016), rumor veracity (Der-czynski et al., 2017), and fake news detection (Lil-lie and Middelboe, 2019).", "Prior works have applied stance detection to many types of debate and discussion settings, including congressional floor debates (Burfoot et al., 2011), online forums (Hasan and Ng, 2013; Dong et al., 2017), persuasive essays (Persing and Ng, 2016), news articles (Hanselowski et al., 2018), and on social media data like Twitter (Mohammad et al., 2016).", "Approaches to stance detection depends on the type of text and relationship the stance is describing.", "For example, stance detection on Twitter often determines the author's stance (for/against/neutral) toward a proposition or target (Mohammad et al., 2016).", "In this work, we adapt the features sets and models used on the SemEval 2016 stance detection task Twitter dataset (Mohammad et al., 2016).", "This dataset has many similarities to our data in terms of post length and topics addressed.", "Approaches to Twitter stance detection include SVMs (Mohammad et al., 2016; Sobhani et al., 2016; Elfardy and Diab, 2016), ensemble classifiers (Tutek et al., 2016; Mourad et al., 2018), convolutional neural networks (Igarashi et al., 2016; Vijayaragha-van et al., 2016; Wei et al., 2016), recurrent neural networks (Zarrella and Marsh, 2016; Dey et al., 2018), and deep learning approaches (Sun et al., 2018; Sobhani et al., 2019).", "Due to the size of the dataset, the difference in domain, and time constraints, we did not test Sun et 
al. (2018)'s model in this work, because we could not gather sufficient argument representation features.", "Argumentation mining is applied to argumentative text to identify the major argumentative components and their relationships to one another (Stede and Schneider, 2018).", "While stance detection identifies the relationship between an author's stance toward a concept or target, argumentation mining identifies relationships between arguments, similar to our task in agreement prediction.", "However, unlike our task, argumentation mining typically de-fines arguments based on argument components, instead of treating an entire post as a single argument.", "In argumentation mining, a single text may contain many arguments.", "The major tasks of argumentation mining include:", "1) identify argumentative text from the nonargumentative text,", "2) classify argumentation components (e.g., Major Claim, Claims, Premise, etc.) in the text,", "3) determine the relationships between the different components, and", "4) classify the relationships as supporting, attacking, or neutral (Lippi and Torroni, 2016).", "End-to-end argument mining seeks to solve all the argumentation mining tasks at once (Persing and Ng, 2016; Eger et al., 2017), but most research focuses on one or two tasks at once.", "The most pertinent task to this work is the fourth task (though often times this task is combined with task 3).", "Approaches to this task include using textual entailment suites with syntactic features (Boltuzic and Snajder, 2014), or machine learning classifiers with different combinations of features including, structural and lexical features (Persing and Ng, 2016), sentiment features (Stab and Gurevych, 2017), and Topic modeling features (Nguyen and Litman, 2016).", "We use many of these types of features in our Ridge-S and SVR-RF-R models.", "Cyber argumentation systems help facilitate and improve understanding of large-scale online discussions, compared to other platforms used for debate, such as social networking and media platforms, online forums, and chat rooms (Klein, 2011).", "These systems typically employ argumentation frameworks, like IBIS (Kunz and Rittel, 1970) and Toul-min's structure of argumentation (Toulmin, 2003), to provide structure to discussions, making them easier to analyze.", "More specialized systems include features that improve the quality and understanding of discussions.", "Argumentation learning systems teach the users effective debating skills using argumentation scaffolding (Bell and Linn, 2000).", "More complex systems, like ICAS and the Deliberatorium (Klein, 2011), provide several integrated analytical models that identify and measure various phenomena occurring in the discussions.", "Our research group has developed an intelligent cyber argumentation system, ICAS, for facilitating large scale discussions among many users (Liu et al., 2007, 2010, 2011; Chanda and Liu, 2015; Liu et al., 2012; Arvapally et al., 2017; Sirrianni et al., 2018).", "ICAS an updated version of the OLIAS argumentation system (Arvapally and Liu, 2013).", "ICAS implements an IBIS structure (Kunz and Rittel, 1970), where each discussion is organized as a tree.", "In ICAS, discussions are organized by issue.", "Issues are important problems that need to be addressed by the community.", "Under each issue are several positions, which act as solutions or approaches toward solving the issue.", "Under each position, there are several arguments that argue for or against the parent position.", "Under these 
arguments, there can be any number of follow-on arguments that argue for or against the parent argument, and so on until the discussion has ended.", "Figure 1 provides a visualization of the discussion tree structure ICAS employs.", "In ICAS, arguments have two components: a textual component and an agreement value.", "The textual component is the written argument the user makes.", "ICAS does not limit the length of argument text; however, in practice, the average argument Figure 1: An example discussion tree structure used in ICAS.", "length is about 160 characters, similar to the length of a tweet.", "The agreement value is a numerical value that indicates the extent to which an argument agrees or disagrees with its parent.", "Unlike other argumentation systems, this system allows users to express partial agreement or disagreement with other posts.", "Users are allowed to select agreement values from a range of -1 to +1 at 0.2 increments that indicate different partial agreement values.", "Positive values indicate partial or complete agreement, negative values indicate partial or complete disagreement, and a value of 0 indicates indifference or neutrality.", "These agreement values represent each post's stance polarity (the sign) and intensity (the magnitude).", "These agreement values are distinctly different from other argumentation weighting schemes where argument weights represent the strength or veracity of an argument (see (Amgoud and Ben-Naim, 2018; Levow et al., 2014)).", "Each agreement value is selected by the author of the argument and is a mandatory step when posting.", "This section describes the models we applied to the stance polarity and intensity prediction problem.", "We applied five different models, adapted from top-performing stance classification models based on their performance and approach on the SemEval 2016 stance classification Twitter dataset (Mohammad et al., 2016).", "Our first two models use a linear ridge regression as the underlying model.", "We created two ridge regression models using two feature sets.", "The first ridge model (Ridge-M) used the feature set described in Mohammad et al. (2016) as their benchmark.", "They used word 1-3 grams and character 2-5 grams as features.", "We filtered out English stop words, tokens that existed in more than 95% of posts, and tokens that appear in less than 0.01% of posts for word N-grams and fewer than 10% for character N-grams.", "There were a total of 838 N-gram features for the Ridge-M model.", "The second ridge model (Ridge-S) used the feature set described in Sobhani, Mohammad, and Kir-itchenko's follow-up paper (2016).", "In that paper, they found the sum of trained word embeddings with 100 dimensions, in addition to the N-gram features outlined by Mohammad et al. (2016), to be the best-performing feature set.", "We trained a word-embedding (skip-gram word2vec) model on the dataset.", "For each post, and summed the embeddings for each token in the post were summed up and normalized by the total number of tokens of a post to generate the word embedding features.", "Ridge-S had 938 total features.", "This model (SRV-RF-R) consisted of an average-voting ensemble containing three different regression models: an Epsilon-Support Vector Regression model, a Random Forest regressor, and a ridge regression model.", "This model is an adaption of the ensemble model presented by Mourad et al. 
(2018) for stance detection.", "Their model used a large assortment of features, including linguistic features, topic features, tweet-specific features, labeled-based features, word-Embedding features, similarity features, context features, and sentiment lexicon features.", "They then used the feature selection technique reliefF (Kononenko et al., 1997) to select the top 50 features for usage.", "Due to the changes in context (Twitter vs. Cyber Argumenta-tion), we constructed a subset of their feature set, which included the following features 1 : Linguistic Features: Word 1-3 grams as binary vectors, count vectors, and tf-idf weighted vectors.", "Character 1-6 grams as count vectors.", "POS tag 1-3 grams concatenated with their words (ex: word1 pos1 . . . ) and concatenated to the end of the post (ex: word1, word2, . . . , POS1, POS2, . . . ).", "1 Please refer to the supplemental material for a full description of the feature set.", "post after LDA topic modeling (Blei et al., 2003) had run on the entire post corpus.", "Word Embedding Features: The 100-dimensional word embedding sums for each word in a post and the cosine similarity between the summed embedding vectors for the target post and its parent post.", "Lexical Features: Sentiment lexicon features outlined in Mourad et al. (2018), excluding the DAL and NRC Hashtag Lexicons.", "We tested using the top 50 features selected using reliefF and reducing the feature size to 50 using Principal Component Analysis (PCA), as well as using the full feature set.", "We found that the full feature set (2855 total) performed significantly better than the reliefF and PCA feature sets.", "We used the full feature set in our final model.", "The highest performing CNN model, pkudblab, applied to the SemEval 2016 benchmark dataset, was submitted by Wei et al. (2016).", "Their model applied a convolutional neural network on the word embedding features of a tweet.", "We modified this model for agreement prediction.", "The resulting model's (pkudblab-PIP) architecture is shown in Figure 2.", "We used pre-trained embeddings (300-dimension) published by the word2vec team (Mikolov et al., 2013).", "Given an input of word embeddings of size d by | s | , where d is the size of the word embedding and | s | is the normalized post length, the input was fed into a convolution layer.", "The convolution layer contained filters with window size ( m ) 3, 4, and 5 words long with 100 filters ( n ) each.", "Then the layers were passed to a max-pooling layer and finally passed through a fully-connected sigmoid layer to produce the final output value.", "We trained the model using a mean squared error loss function and used a 50% dropout layer after the max-pooling layer.", "The RNN model (T-PAN-PIP) is adapted from the T-PAN framework by Dey et al. (2018), which was one of the highest performing neural network models on the SemEval 2016 benchmark dataset.", "The T-PAN framework uses a two-phase LSTM model with attention, based on the architecture proposed by Du et al. 
(2017).", "We adapted this model for regression by making some modifications.", "Our Figure 2: The architecture of pkudblab-PIP for stance polarity and intensity prediction.", "adapted model (T-PAN-PIP) uses only a single-phase architecture, resembling Du et", "al.'s original design (2017), where the output is the predicted agreement value, instead of a categorical prediction.", "Figure 3 illustrates the architecture of T-PAN-PIP.", "It uses word embedding features (with embedding size 300) as input to two network branches.", "The first branch feeds the word embeddings into a bi-directional LSTM (Bi-LSTM) with 256 hidden units, which outputs the hidden states for each direction (128 hidden units each) at every time step.", "The other branch appends the average topic embedding from the topic text (i.e., the text of the post that the input is responding) to the input embeddings and feeds that input into a fully-connected softmax layer, to calculate what Dey et al. (2018) called the subjectivity attention signal.", "The subjectivity attention signals are a linear mapping of each input word's target augmented embedding to a scalar value that represents the importance of each word in the input relative to the target's text.", "These values serve as the attention weights that are used to scale the hidden state output of the Bi-LSTM.", "The weighted attention application layer combines the attention weighs to their corresponding hidden state output, as shown in (1).", "Where a s is the attention signal for word s , h s is the hidden layer output of the Bi-LSTM for word s , | s | is the total number of words, and Q is the resulting attention weighted vector of size 256, the size of the output of the hidden units of the Bi-LISTM.", "The output Q feeds into a fully-connected sigmoid layer and outputs the predicted agreement value.", "We train the model using a mean absolute error loss function.", "The dataset was constructed from three separate empirical studies collected in Fall 2017, Spring 2018, and Spring 2019.", "In each study, a class of undergraduate students in an entry-level sociology class was offered extra credit to participate in discussions in ICAS.", "Each student was asked to discuss four different issues relating to the content they were covering in class.", "The issues were:", "1) Healthcare: Should individuals be required by the government to have health insurance?", "2) Same Sex Adoption: Should same-sex married couples be allowed to adopt children?", "3) Guns on Campus: Should students with a concealed carry permit be allowed to carry guns on campus?", "4) Religion and Medicine: Should parents who believe in healing through prayer be allowed to deny medical treatment for their child?", "Under each issue, there were four positions (with the exception of the Healthcare issue for Fall 2017, which had only 3 positions) to discuss.", "The positions were constructed such that there was one strongly conservative position, one moderately conservative position, one moderately liberal position, and one strongly liberal position.", "The students were asked to post ten arguments under each issue.", "The combined dataset contains 22,606 total arguments from 904 different users.", "Of those arguments, 11,802 are replying to a position, and 10,804 are replying to another argument.", "The average depth of a reply thread tends to be shallow, with 52% of arguments on the first level (reply to position), 44% on the second level, 3% on the third level, and 1% on the remaining levels (deepest level 
was 5).", "When a student posted an argument, they were required to annotate their argument with an agree-Figure 4: A histogram of the different agreement values across all of the issues in the cyber argumentation.", "ment value.", "Overall, argument agreement values skew positive.", "Figure 4 displays a histogram of the agreement values for the arguments in the dataset.", "The annotated labels in this dataset are self-labeled, meaning that when a user replies to a post, they provide their own stance polarity and intensity label.", "The label is a reflection of the author's intended stance toward a post, where the post's text is a semantic description of that intention.", "While these label values are somewhat subjective, they are an accurate reflection of their author's agreement, which we need to capture to analyze opinions in the discussion.", "Self-annotated datasets like this one have been used in stance detection for argumentation mining in the past (see (Boltuzic and Snajder, 2014; Hasan and Ng, 2014)).", "In this study, we want to evaluate the models' performance on the stance polarity and intensity prediction problem.", "We separated the dataset into training and testing sets using a 75-25 split.", "For the neural network models (pkudblab-PIP and T-PAN-PIP), we separated out 10% of the training set as a validation set to detect over-fitting.", "The split was performed randomly without consideration of the discussion issue.", "Each issue was represented proportionally in the training and testing data sets with a maximum discrepancy of less than 1%.", "For evaluation, we want to see how well the regression models are able to predict the continuous agreement value for a post.", "We report the root-mean-squared error (RMSE) for the predicted results.", "We wanted to investigate whether training models for agreement prediction would degrade their performance for stance detection.", "Ideally, these models should learn to identify both stance intensity without impacting their ability to identify stance polarity.", "To test this, we compared each model to their original stance classification models described in their source papers.", "Thus, ridge-H is compared with an SVM trained on the same feature set (SVM-H), ridge-S is compared to a Linear-SVM trained on the same feature set (SVM-S), SVR-RF-R is compared to a majority-voting ensemble of a linear-SVM, Random Forest, and Nave Bayes classifier using the same feature set (SVM-RF-NB), pkudblab-PIP is compared to the original pkudblab model trained using a softmax cross-entropy loss function, and T-PAN-PIP is compared to the original T-PAN model trained using a softmax cross-entropy loss function.", "We trained the classification models for stance detection by converting the continuous agreement values into categorical polarity values.", "When converted into categorical values, all of the positive agreement values are classified as Favoring, all negative values are classified as Opposing, and zero values are classified as Neutral.", "In the dataset, 12,258 arguments are Favoring (54%), 8962 arguments are Opposing (40%), and 1386 arguments are Neutral (6%).", "To assess the stance detection performance of the models trained for agreement prediction, we converted the predicted continuous agreement values output by the models into the categorical values using the same method.", "For evaluation, we report both the accuracy value of the predictions and the macro-average F1-scores for the Favoring and Opposing classes on the testing set.", "This 
scoring scheme allows us to treat the Neutral category as a class that is not of interest (Mourad et al., 2018).", "The results for agreement prediction are shown in Table 1.", "A mean prediction baseline model is shown in the table to demonstrate the difficulty associated with the problem.", "The neural network models perform worse than both the ridge regression and ensemble models.", "Ridge-S performed slightly better than Ridge-M due to the sum word Model RMSE Baseline (Mean) 0.718 Ridge-M 0.620 Ridge-S 0.615 SVR-RF-R 0.596 pkudblab-PIP 0.657 T-PAN-PIP 0.623 Table 1: The results of the regression models for the Agreement prediction task.", "embedding features.", "The best performing model was the SVR-RF-R model with an RMSE of 0.596.", "We performed feature analysis on the SVR-RF-R model using ablation testing (i.e., removing one feature set from the model).", "Results showed that removing a single features set for each type of feature (Word N-grams, Character N-grams, POS N-grams, Topic features, Lexicon features, word embedding features, and cosine similarity feature) impacted the RMSE of the model by less than 0.005.", "Using only the N-gram features resulted in an RMSE of 0.599, which is only a 0.0047 decrease from the total.", "This result matches the difference between Ridge-M (only uses N-gram features) and Ridge-S (includes N-gram and word embedding features).", "Since the N-gram features contain most of the textual information, it had the most impact on the model, while the additional features had smaller effects on the model accuracy.", "We compare the models trained on the agreement prediction task to their classification model counterparts in terms of performance on the stance detection task.", "Tables 2 and 3 show the comparison between the models in terms of accuracy and (macro) F1-score.", "SVR-RF-R has the best accuracy and F1-score for stance detection, which outperformed its classifier counterpart (SVM-RF-NB) by 2.12% in accuracy and +0.016 in F1-score.", "Three of the models trained for stance polarity and intensity prediction, SVR-RF-R, Ridge-S, and T-PAN-PIP, outperformed their classifier counterparts in accuracy by 1-2% and F1-score by +0.009 on average.", "Two of the models trained for stance polarity and intensity prediction, Ridge-H and pkudblab-PIP, slightly un-derperformed their classifier counterparts in accuracy by -0.36% and F1-score by -0.011 on average.", "The models behaved very similarly on the agreement prediction problem, where the difference between the best performing model and the worst performing model is only 0.061.", "Overall, the best model received an RMSE of 0.596, which is reasonably good but can be improved.", "T-PAN-PIP had the worst performance, which is surprising, as it was the only model to include the parent post's information into its prediction, which should have helped improve its performance.", "It is possible that its architecture is unsuitable for agreement prediction; other architectures have been deployed that include a post's parent and ancestors into a stance prediction, which might be more suitable for agreement prediction.", "Future model designs should better incorporate a post's parent information into their predictions.", "The difference in performance between the agreement prediction models and the classification models on the stance detection task was small and sometimes better.", "This demonstrates that the models learning to identify stance intensity do so without significant loss of performance in identifying 
stance polarity.", "Larger gains in performance will likely require information about the post's author.", "Some post authors will state strong levels of agreement in their statements, but annotate their argument with weaker agreement levels.", "For example, one author wrote, Agree completely. Government should stay out of healthcare. and annotated that argument with an agreement value of +0.6.", "The authors were instructed on how to annotate their posts, but the annotations themselves were left to the post's author's discretion.", "Thus including author information into our models would likely improve the stance polarity and intensity prediction results.", "We introduce a new research problem called stance polarity and intensity prediction in a responsive relationship between posts, which predicts both an online post's stance polarity and intensity value toward another post.", "This problem encapsulates stance detection and adds the additional difficulty of detecting subtle differences in intensity found in the text.", "We introduced a new large empirical dataset for agreement prediction, collected using a cyber argumentation platform.", "We implemented five models, adapted from top-performing stance detection models, for evaluation on the new dataset for agreement prediction.", "Our empirical results demonstrate that the ensemble model SVR-RF-R performed the best for agreement prediction and models trained for agreement prediction learn to differentiate between intensity values without degrading their performance for determining stance polarity.", "Research into this new problem of agreement prediction will allow for a more nuanced annotation and analysis of online debate.", "We would like to acknowledge Md Mahfuzer Rahman and Najla Althuniyan for their efforts in developing the ICAS platform and planning the empirical studies.", "We are also grateful to the anonymous reviewers for their constructive input during the review process." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "method", "result", "other", "other", "other", "other", "method", "abstain", "other", "abstain", "other", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "other", "method", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "objective", "method", "other", "other", "method", "other", "other", "other", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "objective", "objective", "objective", "abstain", "other", "other" ]
[ "Robustness and counterfactual bias are usually evaluated on a test dataset.", "However, are these evaluations robust?", "If the test dataset is perturbed slightly, will the evaluation results keep the same?", "In this paper, we propose a double perturbation framework to uncover model weaknesses beyond the test dataset.", "The framework first perturbs the test dataset to construct abundant natural sentences similar to the test data, and then diagnoses the prediction change regarding a single-word substitution.", "We apply this framework to study two perturbation-based approaches that are used to analyze models' robustness and counterfactual bias in English.", "(1) For robustness, we focus on synonym substitutions and identify vulnerable examples where prediction can be altered.", "Our proposed attack attains high success rates ( 96 . 0% 99 . 8% ) in finding vulnerable examples on both original and robustly trained CNNs and Transformers.", "(2) For counterfactual bias, we focus on substituting demographic tokens (e.g., gender, race) and measure the shift of the expected prediction among constructed sentences.", "Our method is able to reveal the hidden model biases not directly shown in the test dataset.", "Our code is available at https://github.com/chong-z/ nlp-second-order-attack .", "Recent studies show that NLP models are vulnerable to adversarial perturbations.", "A seemingly invariance transformation (a.k.a. adversarial perturbation) such as synonym substitutions (Alzantot et al., 2018; Zang et al., 2020) or syntax-guided paraphrasing (Iyyer et al., 2018; Huang and Chang, 2021) can alter the prediction.", "To mitigate the model vulnerability, robust training methods have been proposed and shown effective (Miyato et al., 2017; Jia et al., 2019; Huang et al., 2019; Zhou et al., 2020).", "based on a given test dataset or synthetic sentences constructed from templates (Ribeiro et al., 2020).", "Specifically, the robustness of a model is often evaluated by the ratio of test examples where the model prediction cannot be altered by semantic-invariant perturbation.", "We refer to this type of evaluations as the first-order robustness evaluation.", "However, even if a model is first-order robust on an input sentence x 0 , it is possible that the model is not robust on a natural sentence x 0 that is slightly modified from x 0 .", "In that case, adversarial examples still exist even if first-order attacks cannot find any of them from the given test dataset.", "Throughout this paper, we call x 0 a vulnerable example .", "The existence of such examples exposes weaknesses in models' understanding and presents challenges for model deployment.", "Fig. 1 illustrates an example.", "In this paper, we propose the double perturbation framework for evaluating a stronger notion of second-order robustness .", "Given a test dataset, we consider a model to be second-order robust if there is no vulnerable example that can be identified in the neighborhood of given test instances (2.2).", "In particular, our framework first perturbs the test set to construct the neighborhood, and then diagnoses the robustness regarding a single-word synonym substitution.", "Taking Fig. 
2 as an example, the model is first-order robust on the input sentence x_0 (the prediction cannot be altered), but it is not second-order robust due to the existence of the vulnerable example x̃_0.", "Our framework is designed to identify x̃_0.", "We apply the proposed framework and quantify second-order robustness through two second-order attacks (Section 3).", "We experiment with English sentiment classification on the SST-2 dataset (Socher et al., 2013) across various model architectures.", "Surprisingly, although robustly trained CNNs (Jia et al., 2019) and Transformers (Xu et al., 2020) can achieve high robustness under strong attacks (Alzantot et al., 2018; Garg and Ramakrishnan, 2020) (23.0%-71.6% success rates), for around", "96.0% of the test examples, our attacks can find a vulnerable example by perturbing 1.3 words on average.", "This finding indicates that these robustly trained models, despite being first-order robust, are not second-order robust.", "Furthermore, we extend the double perturbation framework to evaluate counterfactual biases (Kusner et al., 2017) (Section 4) in English.", "When the test dataset is small, our framework can help improve the robustness of the evaluation by revealing hidden biases not directly shown in the test dataset.", "Intuitively, a fair model should make the same prediction for nearly identical examples that reference different groups (Garg et al., 2019) with different protected attributes (e.g., gender, race).", "In our evaluation, we consider a model biased if substituting tokens associated with protected attributes changes the expected prediction, which is the average prediction among all examples within the neighborhood.", "For instance, a toxicity classifier is biased if it tends to predict higher toxicity when we substitute straight → gay in an input sentence (Dixon et al., 2018).", "In the experiments, we evaluate the expected sentiment predictions on pairs of protected tokens (e.g., (he, she), (gay, straight)), and demonstrate that our method is able to reveal the hidden model biases.", "Our main contributions are: (1) We propose the double perturbation framework to diagnose the robustness of existing robustness and fairness evaluation methods.", "(2) We propose two second-order attacks to quantify the stronger notion of second-order robustness. [Figure 2: An illustration of the decision boundary.]", "These attacks reveal model vulnerabilities that cannot be identified by previous attacks.", "(3) We propose a counterfactual bias evaluation method to reveal hidden model bias based on our double perturbation framework.", "In this section, we describe the double perturbation framework, which focuses on identifying vulnerable examples within a small neighborhood of the test dataset.", "The framework consists of a neighborhood perturbation and a word substitution.", "We start by defining word substitutions.", "We focus our study on word-level substitution, where existing works evaluate robustness and counterfactual bias by directly perturbing the test dataset.", "For instance, adversarial attacks alter the prediction by making synonym substitutions, and the fairness literature evaluates counterfactual fairness by substituting protected tokens.", "We integrate the word substitution strategy into our framework as the component for evaluating robustness and fairness.", "For simplicity, we consider a single-word substitution and denote it with the operator ⊕.", "Let X ⊆ V^l be the input space, where V is the vocabulary and
l is the sentence length, p = (p^(1), p^(2)) ∈ V^2 be a pair of synonyms (called patch words), X_p ⊆ X denote the sentences with a single occurrence of p^(1) (for simplicity we skip other sentences), and x_0 ∈ X_p be an input sentence; then x_0 ⊕ p means substituting p^(1) with p^(2) in x_0.", "The result after the substitution is x'_0 = x_0 ⊕ p.", "Taking Fig. 1 as an example, where p = (film, movie) and x_0 = a deep and meaningful film, the perturbed sentence is x'_0 = a deep and meaningful movie.", "Now we introduce the other components of our framework.", "Instead of applying the aforementioned word substitutions directly to the original test dataset, our framework perturbs the test dataset within a small neighborhood to construct similar natural sentences.", "This is to identify vulnerable examples with respect to the model.", "Note that examples in the neighborhood are not required to have the same meaning as the original example, since we only study the prediction difference caused by applying the synonym substitution p (Section 2.1).", "Constraints on the neighborhood.", "We limit the neighborhood sentences to a small ℓ_0 norm ball (around the test instance) to ensure syntactic similarity, and empirically ensure naturalness through a language model.", "The neighborhood of an input sentence x_0 ∈ X is: Neighbor_k(x_0) := Ball_k(x_0) ∩ X_natural, (1) where Ball_k(x_0) = { x | ‖x − x_0‖_0 ≤ k, x ∈ X } is the ℓ_0 norm ball around x_0 (i.e., at most k different tokens), and X_natural denotes natural sentences that satisfy a certain language model score, which will be discussed next.", "Construction with a masked language model.", "We construct neighborhood sentences from x_0 by substituting at most k tokens.", "As shown in Algorithm 1, the construction employs a recursive approach and replaces one token at a time.", "In each recursion, the algorithm first masks each token of the input sentence (either the original x_0 or the x̃ from the last recursion) separately and predicts likely replacements with a masked language model (e.g., DistilBERT; Sanh et al., 2019).", "To ensure naturalness, we keep the top 20 tokens with the largest logits for each mask (subject to a threshold; Line 9 of Algorithm 1).", "Then, the algorithm constructs neighborhood sentences by replacing the mask with the found tokens.", "We use the notation x̃ in the following sections to denote the constructed sentences within the neighborhood.", "With the proposed double perturbation framework, we design two black-box attacks to identify vulnerable examples within the neighborhood of the test set.", "We aim to evaluate robustness for inputs beyond the test set.", "Adversarial attacks search for small, semantic-invariant perturbations to the model input that can alter the prediction.", "To simplify the discussion, in the following we take a binary classifier f(x): X → {0, 1} as an example to describe our framework.", "Let x_0 be a sentence from the test set with label y_0; then the smallest perturbation δ under the ℓ_0 norm distance is: δ* := argmin_δ ‖δ‖_0 s.t. f(x_0 ⊕ δ) ≠ y_0.", "Here δ = p_1 ⊕ ... ⊕ p_l denotes a series of substitutions.", "In contrast, our second-order attacks fix δ = p and search for the vulnerable x̃_0.", "Second-order attacks study the prediction difference caused by applying p.", "For notational convenience we define the prediction difference F(x; p) := f(x ⊕ p) − f(x). (2) [Footnote 1: Black-box attacks only observe the model outputs and do not know the model parameters or the gradient.]", "Taking Fig.
1 as an example, the prediction difference for x_0 on p is F(x_0; p) = f(...moving movie.) − f(...moving film.) = 1.", "Given an input sentence x_0, we want to find patch words p and a vulnerable example x̃_0 such that f(x̃_0 ⊕ p) ≠ f(x̃_0).", "Following Alzantot et al. (2018), we choose p from a predefined list of counter-fitted synonyms (Mrkšić et al., 2016) that maximizes |f_soft(p^(2)) − f_soft(p^(1))|.", "Here f_soft(x): X → [0, 1] denotes the probability output (e.g., after the softmax layer but before the final argmax), f_soft(p^(1)) and f_soft(p^(2)) denote the predictions for the single word, and we enumerate through all possible p for x_0.", "Let k be the neighborhood distance; then the attack is equivalent to solving: x̃_0 = argmax_{x̃ ∈ Neighbor_k(x_0)} |F(x̃; p)|. (3)", "Brute-force attack (SO-Enum).", "A naive approach for solving Eq.", "(3) is to enumerate through Neighbor_k(x_0).", "The enumeration finds the smallest perturbation, but is only applicable for small k (e.g., k ≤ 2) given the exponential complexity.", "Beam-search attack (SO-Beam).", "The efficiency can be improved by utilizing the probability output, where we solve Eq.", "(3) by minimizing the cross-entropy loss with regard to x̃ ∈ Neighbor_k(x_0): L(x̃; p) := −log(1 − f_min) − log(f_max), (4) where f_min and f_max are the smaller and the larger output probability between f_soft(x̃) and f_soft(x̃ ⊕ p), respectively. [Footnote 3: We assume a binary classification task, but our framework is general and can be extended to multi-class classification.]", "Minimizing Eq.", "(4) effectively leads to f_min → 0 and f_max → 1, and we use a beam search to find the best x̃.", "At each iteration, we construct sentences through Neighbor_1(x̃) and only keep the top 20 sentences with the smallest L(x̃; p).", "We run at most k iterations, and stop early if we find a vulnerable example.", "We provide the detailed implementation in Algorithm 2 and a flowchart in Fig. 3.", "In this section, we evaluate the second-order robustness of existing models and show the quality of our constructed vulnerable examples.", "We follow the setup from the robust training literature (Jia et al., 2019; Xu et al., 2020) and experiment with both the base (non-robust) and robustly trained models.", "We train binary sentiment classifiers on the SST-2 dataset with bag-of-words (BoW), CNN, LSTM, and attention-based models. [Table 1 excerpt - Original (70% Negative): in its best moments, resembles a bad high school production of grease, without benefit of song.]", "Base models.", "For BoW, CNN, and LSTM, all models use pre-trained GloVe embeddings (Pennington et al., 2014) and have one hidden layer of the corresponding type with hidden size 100.", "Similar to the baseline performance reported in GLUE (Wang et al., 2019), our trained models have an evaluation accuracy of 81.4%, 82.5%, and 81.7%, respectively.", "For attention-based models, we train a 3-layer Transformer (the largest size in Shi et al. 2020) and fine-tune a pre-trained bert-base-uncased from HuggingFace (Wolf et al., 2020).", "The Transformer uses 4 attention heads and hidden size 64, and obtains 82.1% accuracy.", "The BERT-base uses the default configuration and obtains 92.7% accuracy.", "Robust models (first-order).", "With the same setup as the base models, we apply robust training methods to improve resistance to word substitution attacks.", "Jia et al.
(2019) provide a provably robust training method through Interval Bound Propagation (IBP, Dvijotham et al. 2018) for all word substitutions on BoW, CNN and LSTM.", "Xu et al. (2020) provide a provably robust training method on general computational graphs through a combination of forward and backward linear bound propagation, and the resulting 3-layer Transformer is robust to up to 6 word substitutions.", "For both works we use the same set of counter-fitted synonyms provided in Jia et al. (2019).", "We skip BERT-base due to the lack of an effective robust training method.", "Attack success rate (first-order).", "We quantify first-order robustness through attack success rate, which measures the ratio of test examples that an adversarial example can be found.", "We use first-order attacks as a reference due to the lack of a direct baseline.", "We experiment with two black-box attacks: (1) The Genetic attack (Alzantot et al., 2018; Jia et al., 2019) uses a population-based optimization algorithm that generates both syntactically and semantically similar adversarial examples, by replacing words within the list of counter-fitted synonyms.", "(2) The BAE attack (Garg and Ramakrishnan, 2020) generates coherent adversarial examples by masking and replacing words using BERT.", "For both methods we use the implementation provided by TextAttack (Morris et al., 2020).", "Attack success rate (second-order).", "We also quantify second-order robustness through attack success rate, which measures the ratio of test examples that a vulnerable example can be found.", "To evaluate the impact of neighborhood size, we experiment with two configurations: (1) For the small neighborhood ( k = 2 ), we use SO-Enum that finds the most similar vulnerable example.", "(2) For the large neighborhood ( k = 6 ), SO-Enum is not applicable and we use SO-Beam to find vulnerable examples.", "We consider the most challenging setup and use patch words p from the same set of counter-fitted synonyms as robust models (they are provably robust to these synonyms on the test set).", "We also provide a random baseline to validate the effectiveness of minimizing Eq.", "(4) (Appendix A.1).", "Quality metrics (perplexity and similarity).", "We quantify the quality of our constructed vulnerable examples through two metrics: (1) GPT-2 (Rad-ford et al., 2019) perplexity quantifies the naturalness of a sentence (smaller is better).", "We report the perplexity for both the original input examples and the constructed vulnerable examples.", "(2) (cid:96) 0 norm distance quantifies the disparity between two sentences (smaller is better).", "We report the distance between the input and the vulnerable example.", "Note that first-order attacks have different objectives and thus cannot be compared directly.", "We experiment with the validation split (872 examples) on a single RTX 3090.", "The average running time per example (in seconds) on base LSTM is 31.9 for Genetic, 1.1 for BAE, 7.0 for SO-Enum ( k = 2 ), and 1.9 for SO-Beam ( k = 6 ).", "We provide additional running time results in Appendix A.3.", "Table 1 provides an example of the attack result where all attacks are successful (additional examples in Appendix A.5).", "As shown, our second-order attacks find a vulnerable example by replacing grease musicals , and the vulnerable example has different predictions for bad and unhealthy .", "Note that, Genetic and BAE have different objectives from second-order attacks and focus on finding the adversarial example.", "Next we discuss the results 
from two perspectives.", "Second-order robustness.", "We observe that existing robustly trained models are not second-order robust.", "As shown in Table 2, our second-order attacks attain high success rates not only on the base models but also on the robustly trained models.", "For instance, on the robustly trained CNN and Transformer, SO-Beam finds vulnerable examples within a small neighborhood for around 96 .", "0% of the test examples, even though these models have improved resistance to strong first-order attacks (success rates drop from 62 . 0% 74 . 3% to 23 . 0% 71 . 6% for Genetic and BAE).", "4 This phenomenon can be explained by the fact that both first-order attacks and robust training methods focus on synonym substitutions on the test set, whereas our attacks, due to their second-order nature, find vul-4 BAE is more effective on robust models as it may use replacement words outside the counter-fitted synonyms.", "nerable examples beyond the test set, and the search is not required to maintain semantic similarity.", "Our methods provide a way to further investigate the robustness (or find vulnerable and adversarial examples) even when the model is robust to the test set.", "Quality of constructed vulnerable examples.", "As shown in Table 3, second-order attacks are able to construct vulnerable examples by perturbing 1.3 words on average, with a slightly increased perplexity.", "For instance, on the robustly trained CNN and Transformer, SO-Beam constructs vulnerable examples by perturbing 1.3 words on average, with the median 5 perplexity increased from around 165 to around 210.", "We provide metrics for first-order attacks in Appendix A.5 as they have different objectives and are not directly comparable.", "Furthermore, applying existing attacks on the vulnerable examples constructed by our method will lead to much smaller perturbations.", "As a reference, on the robustly trained CNN, Genetic attack constructs adversarial examples by perturbing 2.7 words on average (starting from the input exam-ples).", "However, if Genetic starts from our vulnerable examples, it would only need to perturb a single word (i.e., the patch words p ) to alter the prediction.", "These results demonstrate the weakness of the models (even robustly trained) for those inputs beyond the test set.", "We perform human evaluation on the examples constructed by SO-Beam.", "Specifically, we randomly 5 We report median due to the unreasonably large perplexity on certain sentences.", "select 100 successful attacks and evaluate both the original examples and the vulnerable examples.", "To evaluate the naturalness of the constructed examples, we ask the annotators to score the likelihood (on a Likert scale of 1-5, 5 to be the most likely) of being an original example based on the grammar correctness.", "To evaluate the semantic similarity after applying the synonym substitution p , we ask the annotators to predict the sentiment of each example, and calculate the ratio of examples that maintain the same sentiment prediction after the synonym substitution.", "For both metrics, we take the median from 3 independent annotations.", "We use US-based annotators on Amazon's Mechanical Turk 6 and pay $0.03 per annotation, and expect each annotation to take 10 seconds on average (effectively, the hourly rate is about $11).", "See Appendix A.2 for more details.", "As shown in Table 4, the naturalness score only drop slightly after the perturbation, indicating that our constructed vulnerable examples have similar naturalness 
as the original examples.", "As for the semantic similarity, we observe that 85% of the original examples maintain the same meaning after the synonym substitution, and the corresponding ratio is 71% for vulnerable examples.", "This indicates that the synonym substitution is an invariance transformation for most examples.", "In addition to evaluating second-order robustness, we further extend the double perturbation framework (Section 2) to evaluate counterfactual biases by setting p to pairs of protected tokens.", "We show that our method can reveal hidden model bias.", "In contrast to second-order robustness, where we consider the model vulnerable as long as there exists one vulnerable example, counterfactual bias focuses on the expected prediction, which is the average prediction among all examples within the neighborhood.", "[Footnote 6: https://www.mturk.com] [Figure 4: An illustration of an unbiased model vs. a biased model.] We consider a model biased if the", "expected predictions for protected groups are different (assuming the model is not intended to discriminate between these groups).", "For instance, a sentiment classifier is biased if the expected prediction for inputs containing woman is more positive (or negative) than for inputs containing man.", "Such bias is harmful as the model may make unfair decisions based on protected attributes, for example in situations such as hiring and college admission.", "Counterfactual token bias.", "We study a narrow case of counterfactual bias, where counterfactual examples are constructed by substituting protected tokens in the input.", "A naive approach to measuring this bias is to construct counterfactual examples directly from the test set; however, such an evaluation may not be robust since test examples are only a small subset of natural sentences.", "Formally, let p be a pair of protected tokens such as (he, she) or (Asian, American), and X_test ⊆ X_p be a test set (as in Section 2.1); we define the counterfactual token bias by: B_{p,k} := E_{x̃ ∈ Neighbor_k(X_test)} F_soft(x̃; p). (5)", "We calculate Eq.", "(5) through an enumeration across all natural sentences within the neighborhood.", "Here Neighbor_k(X_test) = ∪_{x ∈ X_test} Neighbor_k(x) denotes the union of neighborhood examples (of distance k) around the test set, and F_soft(x; p): X × V^2 → [−1, 1] denotes the difference between the probability outputs f_soft (similar to Eq.", "(2)): F_soft(x; p) := f_soft(x ⊕ p) − f_soft(x). (6)", "[Footnote 7: For gender bias, we employ a blacklist to avoid adding gendered tokens during the neighborhood construction.", "This is to avoid semantic shift when, for example, p = (he, she), such that it may refer to different tokens after the substitution.]", "The model is unbiased on p if B_{p,k} ≈ 0, whereas a positive or negative B_{p,k} indicates that the model shows preference for or against p^(2), respectively.", "Fig.
4 illustrates the distribution of (x̃, x̃ ⊕ p) for both an unbiased model and a biased model.", "The aforementioned neighborhood construction does not introduce additional bias.", "For instance, let x_0 be a sentence containing he; even though it is possible for Neighbor_1(x_0) to contain many stereotyping sentences (e.g., containing tokens such as doctor and driving) that affect the distribution of f_soft(x̃), this does not bias Eq.", "(6), as we only care about the prediction difference of replacing he → she.", "The construction has no information about the model objective; thus it would be difficult to bias f_soft(x̃) and f_soft(x̃ ⊕ p) differently.", "In this section, we use gender bias as a running example, and demonstrate the effectiveness of our method by revealing hidden model bias.", "We provide additional results in Appendix A.4.", "We evaluate counterfactual token bias on the SST-2 dataset with both the base and debiased models.", "We focus on binary gender bias and set p to pairs of gendered pronouns from Zhao et al. (2018a).", "Base Model.", "We train a single-layer LSTM with pre-trained GloVe embeddings and hidden size 75 (from TextAttack; Morris et al., 2020).", "The model has 82.9% accuracy, similar to the baseline performance reported in GLUE.", "Debiased Model.", "Data augmentation with gender swapping has been shown to be effective in mitigating gender bias (Zhao et al., 2018a, 2019).", "We augment the training split by swapping all male entities with the corresponding female entities and vice versa.", "We use the same setup as the base LSTM and attain 82.45% accuracy.", "Metrics.", "We evaluate model bias through the proposed B_{p,k} for k = 0, ..., 3.", "Here the bias for k = 0 is effectively measured on the original test set, and the bias for k ≥ 1 is measured on our constructed neighborhood.", "We randomly sample a subset of constructed examples when k = 3 due to the exponential complexity.", "Filtered test set.", "To investigate whether our method is able to reveal model bias that was hidden in the test set, we construct a filtered test set on which the bias cannot be observed directly.", "Let X_test be the original validation split; we construct X_filter by the equation below and empirically set ε = 0.", "005.", "We provide statistics in Table 5.", "Our method is able to reveal the hidden model bias on X_filter, which is not visible with naive measurements.", "In Fig. 5, the naive approach (k = 0) observes very small biases on most tokens (as constructed).", "In contrast, when evaluated by our double perturbation framework (k = 3), we are able to observe noticeable bias, where most p have a positive bias on the base model.", "This observed bias is in line with the measurements on the original X_test (Appendix A.4), indicating that we reveal the correct model bias.", "Furthermore, we observe mitigated biases in the debiased model, which demonstrates the effectiveness of data augmentation.", "To demonstrate how our method reveals hidden bias, we conduct a case study with p = (actor, actress) and show the relationship between the bias B_{p,k} and the neighborhood distance k.", "We present the histograms for F_soft(x̃; p) in Fig. 6 and plot the corresponding B_{p,k} vs.
k in the right-most panel.", "Surprisingly, for the base model, the bias is Figure 6: Left and Middle: Histograms for F soft ( x ; p ) (x-axis) with p = ( actor , actress ) .", "negative when k = 0 , but becomes positive when k = 3 .", "This is because the naive approach only has two test examples (Table", "5) thus the measurement is not robust.", "In contrast, our method is able to construct 141 , 780 similar natural sentences when k = 3 and shifts the distribution to the right (posi-tive).", "As shown in the right-most panel, the bias is small when k = 1 , and becomes more significant as k increases (larger neighborhood).", "As discussed in 4.1, the neighborhood construction does not introduce additional bias, and these results demonstrate the effectiveness of our method in revealing hidden model bias.", "of work has been proposed to study the vulnerability of natural language models, through transformations such as character-level perturbations (Ebrahimi et al., 2018), word-level perturbations (Jin et al., 2019; Ren et al., 2019; Yang et al., 2020; Hsieh et al., 2019; Cheng et al., 2020; Li et al., 2020), prepending or appending a sequence (Jia and Liang, 2017; Wallace et al., 2019a), and generative models (Zhao et al., 2018b).", "They focus on constructing adversarial examples from the test set that alter the prediction, whereas our methods focus on finding vulnerable examples beyond the test set whose prediction can be altered.", "Robustness beyond the test set.", "Several works have studied model robustness beyond test sets but mostly focused on computer vision tasks.", "Zhang et al. (2019) demonstrate that a robustly trained model could still be vulnerable to small perturbations if the input comes from a distribution only slightly different than a normal test set (e.g., images with slightly different contrasts).", "Hendrycks and Dietterich (2019) study more sources of common corruptions such as brightness, motion blur and fog.", "Unlike in computer vision where simple image transformations can be used, in our natural language setting, generating a valid example beyond test set is more challenging because language semantics and grammar must be maintained.", "Counterfactual fairness.", "Kusner et al. (2017) propose counterfactual fairness and consider a model fair if changing the protected attributes does not affect the distribution of prediction.", "We follow the definition and focus on evaluating the counterfactual bias between pairs of protected tokens.", "Existing literature quantifies fairness on a test dataset or through templates (Feldman et al., 2015; Kiritchenko and Mohammad, 2018; May et al., 2019; Huang et al., 2020).", "For instance, Garg et al. (2019) quantify the absolute counterfactual token fairness gap on the test set; Prab-hakaran et al. (2019) study perturbation sensitivity for named entities on a given set of corpus.", "Wallace et al. (2019b); Sheng et al. 
(2019, 2020) study how language generation models respond differently to prompt sentences containing mentions of different demographic groups.", "In contrast, our method quantifies the bias on the constructed neighborhood.", "This work proposes the double perturbation framework to identify model weaknesses beyond the test dataset, and studies a stronger notion of robustness as well as counterfactual bias.", "We hope that our work can stimulate research on further improving the robustness and fairness of natural language models.", "We thank anonymous reviewers for their helpful feedback.", "We thank the UCLA-NLP group for the valuable discussions and comments.", "The research is supported by NSF #1927554, #1901527, #2008173 and #2048280 and an Amazon Research Award.", "Intended use.", "One primary goal of NLP models is generalization to real-world inputs.", "However, existing test datasets and templates are often not comprehensive, and thus it is difficult to evaluate real-world performance (Recht et al., 2019; Ribeiro et al., 2020).", "Our work sheds light on quantifying performance for inputs beyond the test dataset and helps uncover model weaknesses prior to real-world deployment.", "Misuse potential.", "Similar to other existing adversarial attack methods (Ebrahimi et al., 2018; Jin et al., 2019; Zhao et al., 2018b), our second-order attacks can be used to find vulnerable examples in an NLP system.", "Therefore, it is essential to study how to improve the robustness of NLP models against second-order attacks.", "Limitations.", "While the core idea of the double perturbation framework is general, in Section 4 we consider only binary gender in the analysis of counterfactual fairness due to the restriction of the English corpus we used, which only has words associated with binary gender such as he/she, waiter/waitress, etc." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "objective", "method", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "result", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "result", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method" ]
[ "Large pre-trained vision-language (VL) models can learn a new task with a handful of examples and generalize to a new task without fine-tuning.", "However, these VL models are hard to deploy for real-world applications due to their impractically huge sizes and slow inference speed.", "To solve this limitation, we study prompt-based low-resource learning of VL tasks with our proposed method, FEWVLM, relatively smaller than recent few-shot learners.", "For FEWVLM, we pre-train a sequence-to-sequence transformer model with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM).", "Furthermore, we analyze the effect of diverse prompts for few-shot tasks.", "Experimental results on VQA show that FEWVLM with prompt-based learning outperforms Frozen (Tsimpoukelli et al., 2021) which is 31 larger than FEWVLM by 18.2% point and achieves comparable results to a 246 larger model, PICa (Yang et al., 2021).", "In our analysis, we observe that (1) prompts significantly affect zero-shot performance but marginally affect few-shot performance, (2) models with noisy prompts learn as quickly as hand-crafted prompts given larger training data, and (3) MaskedLM helps VQA tasks while PrefixLM boosts captioning performance.", "Our code is publicly available at https://github.", "com/woojeongjin/FewVLM 1 Introduction Fine-tuning large pre-trained language models (PLMs) have led to strong results in various domains including vision-language tasks (Devlin et al., 2019; Raffel et al., 2020; Brown et al., 2020; Radford et al., 2021).", "Such large PLMs can learn a new task with a few examples or generalize to a new task without fine-tuning on any training examples, i.e., few-shot and zero-shot learnWork was mainly done while interning at Microsoft Azure AI.", "ing (Brown et al., 2020; Radford et al., 2021; Tsim-poukelli et al., 2021).", "Few-shot learning overcomes the challenges of data-hungry supervised learning, where collecting human-labeled data is costly and slow.", "However, recent few-shot models such as GPT3 (Brown et al., 2020), Frozen (Tsimpoukelli et al., 2021), and PICa (Yang et al., 2021) are too large to deploy in small or moderate computing machines due to their gigantic model sizes In this paper, we study low-resource learning of VL tasks with our proposed method, FEWVLM, a moderate-sized vision-language model, in which we fine-tune the model with no or a handful of training examples.", "For FEWVLM, we pre-train a sequence-to-sequence transformer model (Cho et al., 2021; Raffel et al., 2020) with prefix language modeling (PrefixLM) and masked language modeling (MaskedLM).", "This setup is more practical in that training and inference can be run economically using standard computing hardware and 2763 Transformer Encoder Transformer Decoder What position is this man playing?", "it is expensive to obtain a large number of quality training examples in the real world.", "In such a few-shot setting, task-specific prompts or task descriptions are important and have shown effectiveness in few-shot NLP tasks (Gao et al., 2021; Radford et al., 2021; Schick and Schtze, 2021a,b; Brown et al., 2020).", "To extend the success to VL tasks, we aim to answer the following questions for prompt-based low-resource VL learning.", "Q1) How does prompt design affect zero/few-shot learning on new tasks?", "Q2) Does prompt design still matter given larger training?", "Q3) How do different pre-training objectives affect zero/few-shot learning?", "To answer these questions, we explore various prompt 
formats including hand-crafted and noisy prompts on zero/few-shot VL learning datasets.", "In addition, we study pre-training objectives on few-shot tasks inspired by Raffel et al. (2020): prefix language modeling (PrefixLM) inspired by Raffel et al. (2020) and masked language modeling (MaskedLM).", "To this end, we investigate the model's performance on few-shot VL tasks including visual question answering (Goyal et al., 2017; Marino et al., 2019; Hudson and Manning, 2019), captioning (Agrawal et al., 2019; Young et al., 2014) (Fig. 1), and miniImageNet (Vinyals et al., 2016).", "In our empirical analysis, our FEWVLM with prompt-based learning outperforms Frozen (Tsim-poukelli et al., 2021) which is 31 larger than FEWVLM by 18.2% point on zero-shot VQAv2 and achieves comparable results to a 246 larger model, PICa (Yang et al., 2021).", "Furthermore, we observe that (1) prompts significantly affect zero-shot performance but marginally affect few-shot performance on new tasks (6.2 and 6.3), (2) models with noisy prompts learn as quickly as hand-crafted prompts given larger training data (6.5), and (3) MaskedLM helps few-shot VQA tasks while PrefixLM boosts captioning performance (6.6).", "Vision-language few-shot learning.", "Recently, several few-shot learners on vision-language tasks were proposed including GPT (Radford et al., 2019; Brown et al., 2020), Frozen (Tsimpoukelli et al., 2021), PICa (Yang et al., 2021), and SimVLM (Wang et al., 2021).", "Frozen (Tsim-poukelli et al., 2021) is a large language model based on GPT-2 (Radford et al., 2019), and is transformed into a multimodal few-shot learner by extending the soft prompting to incorporate a set of images and text.", "Their approach shows the few-shot capability on visual question answering and image classification tasks.", "Similarly, PICa (Yang et al., 2021) uses GPT-3 (Brown et al., 2020) to solve VQA tasks in a few-shot manner by providing a few in-context VQA examples.", "It converts images into textual descriptions so that GPT-3 can understand the images.", "SimVLM (Wang et al., 2021) is trained with prefix language modeling on weakly-supervised datasets.", "It demonstrates its effectiveness on a zero-shot captioning task.", "While these models achieve improvement on few-shot tasks, they are impractical to use in real-world applications due to their model sizes.", "Language model prompting.", "Providing prompts or task descriptions play an vital role in improving pre-trained language models in many tasks (Gao et al., 2021; Radford et al., 2021; Schick and Schtze, 2021a,b; Brown et al., 2020).", "Among them, GPT models (Radford et al., 2019; Brown et al., 2020) achieved great success in prompting 2764 a lady walking next to a bicycle Prefix LM carrying an umbrella Masked LM a lady walking next to a <text_1> carrying an <text_2> <text_1> bicycle <text_2> umbrella Input image Target text Input text Figure 3: Pre-training objectives.", "or task demonstrations in NLP tasks.", "In light of this direction, prompt-based approaches improve small pre-trained models in few-shot text classification tasks (Gao et al., 2021; Schick and Schtze, 2021a,b).", "CLIP (Radford et al., 2021) also explores prompt templates for image classification which affect zero-shot performance.", "We follow these core ideas so we aim to improve zero-shot and few-shot performance using prompts in vision-language tasks.", "In this work, we study the zero-shot and few-shot performance of vision-language models L .", "We introduce our analysis setup: problem 
formulation, analysis questions, downstream tasks and datasets, evaluation metrics, and baselines.", "For zero-shot tasks, a pre-trained VL model L have no access to training set D train and development set D dev , and directly makes inference on the test instances D test .", "For few-shot tasks, we compose a dev set D dev from training data and ensure that |D train | = |D dev | following Perez et al. (2021); Gao et al. (2021) to tune the hyper-parameters and select the model.", "We limit the sizes of training and development sets to meet the goal of learning from limited data.", "The size of D train and D dev are small i.e., we set the size of both to 16 in our study.", "We aim to answer the following questions in this study through experiments on multiple VL datasets.", "Q1) How does prompt design affect zero/few-shot learning on new tasks?", "Providing a pre-trained language model with task-specific prompts or significantly improves zero-shot and few-shot performance on NLP domains (Gao et al., 2021; Schick and Schtze, 2021a,b; Brown et al., 2020).", "For this question, we test several ad-hoc prompts on vision-language tasks and analyze how large zero-shot and few-shot performance is affected by different prompts, hand-crafted and noisy prompts, in Sec. 6.5.", "Q2) Does prompt design still matter given larger training data?", "As we will see in our experiments, prompts affect the zero/few-shot performance.", "However, prompts may have different effects when models are given different sizes of training data.", "To answer this question, we train models with different sizes of training data and various prompts, and compare the performance between different prompts.", "Q3) How do different pre-training objectives affect zero/few-shot performance?", "We study two different pre-training objectives on few-shot performance: prefix language modeling (PrefixLM) inspired by Raffel et al. (2020) and masked language modeling (MaskedLM).", "In this setup, we pre-train our model with different objectives and test the model on zero-shot and few-shot tasks in Sec. 
6.6.", "In this work, we mainly focus on three tasks: visual question answering, captioning, and categorical learning.", "The visual question answering task requires models to answer a question to a given context image.", "We convert the visual question answering task into a generation task so that the model can generate answers in the zero-shot setting.", "The captioning task requires a model to generate descriptions for a given context image.", "The categorical learning requires a model to choose the correct category or class.", "We evaluate our model in an open-ended fashion to quantify fast learning of categories, in which it must generate correct labels unlike other classification methods.", "and Manning, 2019) for visual question answering tasks, and NoCaps (Agrawal et al., 2019), and Flickr30k (Young et al., 2014) for image captioning.", "1 We use Karpathy split (Karpathy and Li, 2015) for Flickr30k, which re-splits train and val images into 29,000 / 1,014 / 1,000 for train / validation / test.", "For categorical learning, we include miniImageNet (Vinyals et al., 2016), a meta learning dataset.", "Following (Tsimpoukelli et al., 2021), we use only meta test data to evaluate FEWVLM in a few-shot manner and test on 5-way k -shot setup, where 5 classes and k examples per class are given.", "2 3.4 Evaluation Metrics To evaluate few-shot performance, we randomly sample 5 different training and dev splits and measure average performance on the 5 splits.", "We fine-tune the vision-language models with 200 epochs for the few-shot setup and choose the best checkpoint on the dev set.", "For NoCaps task, it does not have training data.", "Thus we use the training data from COCO captioning in the experiments following Wang et al. (2021).", "We evaluate on the VQAv2 validation set, GQA test-dev, OK-VQA test set, test set of Karpathy split for Flickr30k captioning, and NoCaps validation set.", "We adopt accuracy for VQA datasets and miniImageNet, and CIDEr (Vedantam et al., 2015) and SPICE (Anderson et al., 2016) as evaluation metrics for captioning.", "We evaluate strong zero/few-shot vision-language learners for comparison: Frozen (Tsimpoukelli et al., 2021), PICa (Yang et al., 2021) for VQA", "2 For VQA and captioning, we include k samples in total, not per class.", "datasets and SimVLM (Wang et al., 2021) for captioning datasets.", "We include Unified VLP (Zhou et al., 2020) for few-shot VQAv2 and Flickr30k.", "Also, we compare them with fully fine-tuned models L full as upper bounds of few-shot models for each task; these models are fine-tuned on the entire datasets while few-shot models can access a small amount of data.", "For fully fine-tuned models L full , we borrow numbers from Uniter large (Chen et al., 2019) for VQAv2, Oscar (Li et al., 2020b) for GQA, SimVLM (Wang et al., 2021) and VinVL (Zhang et al., 2021) for NoCaps CIDER and SPICE respectively, and Unified VLP (Zhou et al., 2020) for Flickr30k captioning.", "We include VL-T5 no-vqa as a baseline which is pre-trained without visual question answering datasets (Cho et al., 2021).", "For miniImageNet, we include Frozen and AFHN (Li et al., 2020a).", "Frozen is designed for few-shot learning while AFHN is for meta learning, which is smaller and faster.", "Before diving into the analysis, we introduce our model, FEWVLM, to do zero/few-shot learning on VL tasks and answer the analysis questions we raised.", "We introduce FEWVLM architecture and pre-training objectives.", "We adopt an encoder-decoder architecture (Cho et al., 2021; 
Vaswani et al., 2017), to encode visual and text inputs and generate target text.", "We represent an input image with 36 object regions from a Faster R-CNN (Ren et al., 2015) trained on Visual Genome (Krishna et al., 2017).", "The sets of region representations are fed into the encoder by appending them to the text Cho et al. (2021).", "We train the model parameters by minimizing the negative 2766 log-likelihood of target text y tokens given input text x and image v : L = | y | (cid:88) i =1 log P ( y i | y <i , x, v ) .", "(1) The model is not task-specific, so it is a good option for zero/few-shot settings.", "We pre-train the models with both prefix language modeling (PrefixLM) and masked language modeling (MaskedLM).", "Fig. 3 illustrates the PrefixLM and MaskedLM.", "Prefix language modeling.", "We include prefix language modeling (PrefixLM) following Raffel et al. (2020).", "Given an image and a span of text, this objective randomly splits the text into two separate components; the former component with the given image is used as inputs to the encoder and the latter component is used as target text to be generated by the decoder.", "Masked language modeling.", "We follow Cho et al. (2021) to do masked language modeling.", "This objective is to replace random spans with numbered sentinel tokens, e.g., <text_1> , and then the masked text is fed into the encoder.", "Then the decoder generates the masked spans as target text.", "We randomly mask 15% of input text tokens and replace them with sentinel tokens.", "Pre-training data.", "To pre-train FEWVLM, we collect image-caption data from MS COCO (Lin et al., 2014; Chen et al., 2015) and Visual Genome (VG) (Krishna et al., 2017).", "The pre-training datasets contains 9.18M image-text pairs and 180K distinct images.", "In downstream tasks, we train our model with few-shot examples.", "Fig. 
2 shows an illustration of FEWVLM in inference time.", "Given a prompt template P , we first get input text and target text using the template x, y = P ( input , label ) .", "Then we train model parameters by minimizing the negative log-likelihood in Eq.", "(1).", "In inference, we use the same prompt and the model generates the label text.", "Here we obtain the final label by removing the target prompt template.", "Prompts affect the performance of the vision-language model (Cho et al., 2021); we study the", "effect of different prompts on the zero-shot and few-shot performance on downstream tasks.", "Tables 1 and 11 show prompts we used in our experiments.", "The visual question answering tasks (VQA, OK-VQA, and GQA) require models to answer a question to a given context image.", "Recent approaches (Chen et al., 2019; Tan and Bansal, 2019; Su et al., 2020; Li et al., 2019, 2020b) tackle visual question answering tasks as multi-label classification over a predefined set of answer candidates.", "Instead, we approach the visual question answering tasks as a generation task so that the model can produce the answers without introducing any task-specific heads.", "In this setup, prompts act as constraints to guide the models to generate proper formats of answers; models might generate a sentence for VQA, which is not the correct format, without prompts.", "Therefore, we study several prompts for input and output as shown in Tables 1 and 11; we explore hand-crafted prompts (Table 1) and noisy prompts for ablation study (Table 11).", "Hand-crafted prompts.", "For input prompts, we explore three different templates: question: [Q] answer: and with the <text_1> sentinel token at the end.", "Similarly to masked language modeling, we expect models to generate words thanks to the sentinel token.", "For target prompts, we explore two different templates: [A] (an answer) and <text_1> [A] (an answer with a sentinel to-ken).", "Here, we aim to mimic MaskedLM's target text format, so the similar format helps the model quickly adapt to the new task.", "We call each prompt ID as in Table 1.", "Noisy prompts.", "To understand the effect of noisy prompts in zero/few-shot learning, we include irrelevant prompts, noisy tokens, and random sentences as in Table 11.", "Irrelevant prompts are random questions or instructions that mislead models to answer wrong questions or follow irrelevant instructions.", "Noisy tokens are randomly selected from T5's vocabulary, so we test how robust our model is to random tokens.", "Finally, random sentences are captions from MS COCO and this gives false information to models.", "In NoCaps and Flickr30k, we explore three handcrafted input prompts: a picture of , a photo of , and an image of .", "We study the effect of different 2767 Table 2: Zero-shot VQA results.", "word choices in this captioning task.", "While the three different words have similar meanings, they show different performance in zero-shot and few-shot tasks as we will see in our", "experiments..", "For target prompts, we just train the model with the original caption without any additional prompts.", "In miniImageNet, we train our model with a handcrafted input prompt, This is <text_1> , and target prompt, <text_1> [A] .", "We compare our model with and without prompts in this dataset to study whether prompts are helpful in categorical learning.", "In this section, we first discuss our main results on zero-shot and few-shot tasks and then answer the questions we raised: does prompt design matter in zero/few-shot 
learning?", "For pre-training, we set batch size 1,280 and 800 for FEWVLM base and FEWVLM large , respectively and pre-train them with 30 epochs.", "We use learning rate 1e-4 with 5% linear warmup.", "For few-shot learning, we train models with 200 epochs, learning rate 5e-5 and 5% linear warmup and choose the best checkpoint on the dev set.", "For FEWVLM, we use question: [Q] answer <text_1> (P3) as an input prompt and <text_1> [A] as a target prompt for visual question answering, and an image of (Q3) as an input prompt for captioning, which show the best performance.", "We will study the effect of different prompts in Sec. 6.5.", "The sizes of of D train and D dev are 16 on VQA and captioning tasks.", "For miniImageNet, we use This is <text_1> , and <text_1> [A] as input and target prompts.", "In this data, we test with {1, 3, 5}-shots per class.", "We evaluate the existing models in a zero-shot manner, in which models do not have access to any training data.", "Tables 2 and 4 show the results on VQA and captioning datasets, respectively.", "First, FEWVLM with the hand-crafted prompt (P3) achieves better performance than other baselines on VQA datasets.", "In particular, our FEWVLM base significantly outperforms Frozen 2768 Table 6: 5-way miniImageNet results.", "which is about 31 larger than ours.", "Also, PICa based on GPT3 (Brown et al., 2020) shows the best performance on OK-VQA.", "It is noticeable that our FEWVLM large , the 246 smaller model, achieves the comparable result to PICa.", "Compared to VL-T5 no-vqa which is the same architecture as ours, FEWVLM base improves VQAv2 performance by about 30% point.", "As we will see in the later section, our pre-training objectives and the prompts boost the VQA performance.", "On NoCaps, SimVLM huge shows the best performance.", "Our FEWVLM base significantly improves the performance compared to VL-T5 no-vqa .", "As we will see in the later section, our pre-training objectives and the prompts boost the VQA and captioning performance.", "Tables 3 and 5 show the few-shot performance on VQA and captioning datasets.", "Sizes of training and validation sets are 16 for FEWVLM, VL-T5 no-vqa , and Unified VLP; and Frozen and PICa use 4 and 16 in-context demonstration examples, respectively.", "On VQAv2 and OK-VQA, PICa shows the best performance while our FEWVLM large achieves the comparable result on VQAv2.", "OK-VQA requires external knowledge to answer unlike other VQA datasets, so larger models and large pre-training data (prior knowledge) are necessary to improve.", "Interestingly, FEWVLM base , which is trained with 4 training examples, outperforms Frozen.", "On captioning data, FEWVLM base notably outperforms VL-T5 no-vqa by 31.1% point on NoCaps CIDEr.", "Unified VLP slightly underperforms FEWVLM on Flickr30k captioning task.", "We conjecture that their architecture is based on a encoder-decoder transfomer and it is pre-trained with a captioning task (Zhou et al., 2020).", "Table 6 shows results on miniImageNet, where models must choose the correct class for each image.", "We train and evaluate FEWVLM in an generative manner; the model must generate correct label text to get the credit.", "FEWVLM significantly outperforms Frozen in all shots.", "Note that we train FEWVLM with a few training samples while Frozen uses them as in-context demonstration.", "Interestingly, FEWVLM with a hand-crafted prompt improves performance a lot on the 1-shot case, while it marginally improves on the 5-shot case.", "Here we examine the effect of different 
prompts on FEWVLM base in Table 7 and Figs.", "6, 5, and", "4. We test the model on VQAv2 and Flickr30k datasets.", "Table 7 shows the zero-shot performance on VQAv2 and Flickr30k.", "We observe that zero-shot results are remarkably affected by input prompts on both datasets.", "For input prompts, <text_1> in P1 and P3 helps the zero-shot predictions significantly compared to no prompt and P2.", "We conjecture that <text_1> guides the model to predict masked spans similarly to MaskedLM, so it improves the performance.", "On Flickr30k, we examine different word choices of prompts: a picture of (Q1), a photo of (Q2), and an image of (Q3).", "For instance, using an image of outperforms using no prompt by 21.4 point.", "It is noticeable that different word choices significantly affect the zero-shot results.", "We study various input prompts including irrelevant prompts, noisy tokens, and random sentences on VQAv2 (Fig. 4).", "First, noisy prompts and no prompt achieve near 0 accuracy on the zero-shot setting.", "In few-shot predictions, FEWVLM with 2769 0 10 20 30 50 100 200 Training size 0 10 20 30 40 50 ACC o n VQA v 2 hand-crafted no prompt irrelevant prompts noisy tokens random sentences Figure 4: VQAv2 results on noisy prompts.", "noisy prompts learns as quickly as hand-crafted prompts given larger data.", "For example, our model with noisy prompts achieves comparable results to the best hand-crafted prompt.", "Among all different types of noisy prompts, random sentences deteriorate performance the most.", "This is because the random sentences come from captions in MS COCO, so the model might choose the answer from wrong captions not from images.", "Interestingly, no prompt outperforms the other noisy prompts and even shows similar to or better than the handcrafted prompt with larger training data.", "We also observe a similar phenomenon on Flickr30k; no prompt performs similar to hand-crafted prompts in Fig.", "5. 10 20 30 50 100 200 300 Training size 30 35 40 45 50 ACC o n VQA v 2 <text_1> [A] [A] Figure 6: VQAv2 results on different target prompts.", "In addition, we explore two different target prompts, <text_1> [A] and [A] .", "We try to mimic the MaskedLM's target text format, so we add <text_1> to target prompt on VQA.", "This might help the model's fast adaptation to a new task since they share the same target prompt.", "In Fig. 
6, we notice an interesting phenomenon: the target prompt [A] shows a larger variance than the other, suggesting that introducing <text_1> helps the model quickly adapt to a new task.", "However, both prompts show similar results given larger training data, e.g., 300 examples.", "We investigate how pre-training objectives affect different tasks.", "We pre-train FEWVLM with different pre-training objectives: masked language modeling (MaskedLM) and prefix language modeling (PrefixLM).", "We find that MaskedLM helps VQA tasks while PrefixLM helps captioning tasks in zero-shot and few-shot settings.", "We conjecture that MaskedLM predicts spans, which is analogous to predicting correct answers to questions, while PrefixLM generates the rest of a given prefix, which is similar to captioning.", "In other words, if the pre-training task is similar to the downstream task, then it helps performance further.", "When pre-training with both objectives, they create a synergistic effect and thus improve cross-task generalization.", "In this work, we present FEWVLM, a few-shot prompt-based learner on vision-language tasks.", "On diverse datasets, FEWVLM outperforms baselines and shows comparable results to PICa, which is 246 times larger than ours.", "We observe that prompts are vital in zero-shot and few-shot tasks and that each pre-training objective helps different few-shot tasks.", "Also, we find that models with larger training data are not significantly affected by noisy prompts.", "Future work includes exploring automatic prompt generation and diverse formats of few-shot tasks such as multiple-choice VQA.", "Finding optimal prompts requires exhaustive engineering to achieve the best performance and leads to impressive results.", "We leave the exploration of these directions to future investigations." ]
[ "abstain", "abstain", "objective", "method", "method", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "result", "abstain", "abstain", "abstain" ]
[ "Cross-lingual word embeddings ( CLWE ) are often evaluated on bilingual lexicon induction ( BLI ).", "Recent CLWE methods use linear projections, which underfit the training dictionary, to generalize on BLI .", "However, underfitting can hinder generalization to other downstream tasks that rely on words from the training dictionary.", "We address this limitation by retrofitting CLWE to the training dictionary, which pulls training translation pairs closer in the embedding space and overfits the training dictionary.", "This simple post-processing step often improves accuracy on two downstream tasks, despite lowering BLI test accuracy.", "We also retrofit to both the training dictionary and a synthetic dictionary induced from CLWE , which sometimes generalizes even better on downstream tasks.", "Our results confirm the importance of fully exploiting the training dictionary in downstream tasks and explains why BLI is a flawed CLWE evaluation.", "Cross-lingual word embeddings ( CLWE ) map words across languages to a shared vector space.", "Recent supervised CLWE methods follow a projection-based pipeline (Mikolov et al., 2013).", "Using a training dictionary, a linear projection maps pre-trained monolingual embeddings to a multilingual space.", "While CLWE enable many multilingual tasks (Klementiev et al., 2012; Guo et al., 2015; Zhang et al., 2016; Ni et al., 2017), most recent work only evaluates CLWE on bilingual lexicon induction ( BLI ).", "Specifically, a set of test words are translated with a retrieval heuristic (e.g., nearest neighbor search) and compared against gold translations.", "BLI accuracy is easy to compute and captures the desired property of CLWE that translation pairs should be close.", "does not always correlate with accuracy on downstream tasks such as cross-lingual document classification and dependency parsing (Ammar et al., 2016; Fujinuma et al., 2019; Glavas et al., 2019).", "Let's think about why that might be.", "BLI accuracy is only computed on test words.", "Consequently, BLI hides linear projection's inability to align all training translation pairs at once; i.e., projection-based CLWE underfit the training dictionary.", "Underfitting does not hurt BLI test accuracy, because test words are excluded from the training dictionary in BLI benchmarks.", "However, words from the training dictionary may be nonetheless predictive in downstream tasks; e.g., if good is in the training dictionary, knowing its translation is useful for multilingual sentiment analysis.", "In contrast, overfitting the training dictionary hurts BLI but can improve downstream models.", "We show this by adding a simple post-processing step to projection-based pipelines (Figure 1).", "After training supervised CLWE with a projection, we retrofit (Faruqui et al., 2015) the CLWE to the same training dictionary.", "This step pulls training translation pairs closer and overfits: the updated embeddings have perfect BLI training accuracy, but BLI test accuracy drops.", "Empirically, retrofitting improves accuracy in two downstream tasks other than BLI , confirming the importance of fully exploiting the training dictionary.", "Unfortunately, retrofitting to the training dictionary may inadvertently push some translation pairs further away.", "To balance between fitting the training dictionary and generalizing on other words, we explore retrofitting to both the training dictionary and a synthetic dictionary induced from the CLWE .", "Adding the synthetic dictionary keeps some correctly aligned 
translations in the original CLWE and can further improve downstream models by striking a balance between training and test BLI accuracy.", "In summary, our contributions are two-fold.", "First, we explain why BLI does not reflect downstream task accuracy.", "Second, we introduce two post-processing methods to improve downstream models by fitting the training dictionary better.", "This section reviews projection-based CLWE .", "We then discuss how BLI evaluation obscures the limitation of projection-based methods.", "Let X R d n be a pre-trained d -dimensional word embedding matrix for a source language, where each column x i R d is the vector for word i from the source language with vocabulary size n , and let Z R d m be a pre-trained word embedding matrix for a target language with vocabulary size m .", "Projection-based CLWE maps X and Z to a shared space.", "We focus on supervised methods that learn the projection from a training dictionary D with translation pairs ( i, j ) .", "Mikolov et al. (2013) first propose projection-based CLWE .", "They learn a linear projection W R d d from X to Z by minimizing distances between translation pairs in a training dictionary: min W ( i,j ) D Wx i z j 22 .", "Recent work improves this method with different optimization objectives (Dinu et al., 2015; Joulin et al., 2018), orthogonal constraints on W (Xing et al., 2015; Artetxe et al., 2016; Smith et al., 2017), pre-processing (Zhang et al., 2019), and subword features (Chaudhary et al., 2018; Czarnowska et al., 2019; Zhang et al., 2020).", "Projection-based methods underfit a linear projection has limited expressiveness and cannot perfectly align all training pairs.", "Unfortunately, this weakness is not transparent when using BLI as the standard evaluation for CLWE , because BLI test sets omit training dictionary words.", "However, when the training dictionary covers words that help downstream tasks, underfitting limits generalization to other tasks.", "Some BLI benchmarks use frequent words for training and infrequent words for testing (Mikolov et al., 2013; Conneau et al., 2018).", "This mismatch often appears in real-world data, because frequent words are easier to find in digital dicitonaries (Czarnowska et al., 2019).", "Therefore, training dictionary words are often more important in downstream tasks than test words.", "To fully exploit the training dictionary, we explore a simple post-processing step that overfits the dictionary: we first train projection-based CLWE and then retrofit to the training dictionary (pink parts in Figure 1).", "Retrofitting was originally introduced for refining monolingual word embeddings with synonym constraints from a lexical ontology (Faruqui et al., 2015).", "For CLWE , we retrofit using the training dictionary D as the ontology.", "Intuitively, retrofitting pulls translation pairs closer while minimizing deviation from the original CLWE .", "Let X and Z be CLWE trained by a projection-based method, where X = WX are the projected source embeddings and Z = Z are the target embeddings.", "We learn new CLWEX and Z by minimizing L = L a + L b , (2) where L a is the squared distance between the updated CLWE from the original CLWE : L a = X X 2 + Z Z 2 , (3) and L b is the total squared distance between translations in the dictionary: L b = ( i,j ) D ij x i z j 2 .", "(4) We use the same and as Faruqui et al. 
(2015) to balance the two objectives.", "Retrofitting tends to overfit.", "If is zero, minimizing L b collapses each training pair to the same vector.", "Thus, all training pairs are perfectly aligned.", "In practice, we use a non-zero for regularization, but the updated CLWE still have perfect training BLI accuracy (Figure 2).", "If the training dictionary covers predictive words, we expect retrofitting to improve downstream task accuracy.", "While retrofitting brings pairs in the training dictionary closer, the updates may also separate translation pairs outside of the dictionary because retrofitting ignores words outside the training dictionary.", "This can hurt both BLI test accuracy and downstream task accuracy.", "In contrast, projection-based methods underfit but can discover translation pairs outside the training dictionary.", "To keep the original CLWE 's correct translations, we retrofit to both the training dictionary and a synthetic dictionary induced from CLWE (orange, Figure 1).", "Early work induces dictionaries from CLWE through nearest-neighbor search (Mikolov et al., 2013).", "We instead use cross-domain similarity local scaling (Conneau et al., 2018, CSLS ), a translation heuristic more robust to hubs (Dinu et al., 2015) (a word is the nearest neighbor of many words).", "We build a synthetic dictionary D with word pairs that are mutual CSLS nearest neighbors.", "We then retrofit the CLWE to a combined dictionary D D .", "The synthetic dictionary keeps closely aligned word pairs in the original CLWE , which sometimes improves downstream models.", "We retrofit three projection-based CLWE to their training dictionaries and synthetic dictionaries.", "1 We evaluate on BLI and two downstream tasks.", "While retrofitting decreases test BLI accuracy, it often improves downstream models.", "We align English embeddings with six target languages: German ( DE ), Spanish ( ES ), French ( FR ), Italian ( IT ), Japanese ( JA ), and Chinese ( ZH ).", "We use 300-dimensional fastText vectors trained on Wikipedia and Common Crawl (Grave et al., 2018).", "We lowercase all words, only keep the 200K most frequent words, and apply five rounds of Iterative Normalization (Zhang et al., 2019).", "We use dictionaries from MUSE (Conneau et al., 2018), a popular BLI benchmark, with standard splits: train on 5K source word translations and test on 1.5K words for BLI .", "For each language, we train three projection-based CLWE : canonical correlation analysis (Faruqui and Dyer, 2014, CCA ), 1 Code at https://go.umd.edu/retro_clwe .", "Procrustes analysis (Conneau et al., 2018, PROC ), and Relaxed CSLS loss (Joulin et al., 2018, RCSLS ).", "We retrofit these CLWE to the training dictionary (pink in figures) and to both the training and the synthetic dictionary (orange in figures).", "In MUSE , words from the training dictionary have higher frequencies than words from the test set.", "2 For example, the most frequent word in the English-French test dictionary is torpedo, while the training dictionary has translations for frequent words such as the and good.", "As discussed in 2, more frequent words are likely to be more salient in downstream tasks, so underfitting these more frequent training pairs hurts generalization to downstream tasks.", "3 4.2 Intrinsic Evaluation: BLI We first compare BLI accuracy on both training and test dictionaries (Figure 2).", "We use CSLS to translate words with default parameters.", "The original projection-based CLWE have the highest test accuracy but underfit the 
training dictionary.", "Retrofitting to the training dictionary perfectly 2 https://github.com/facebookresearch/ MUSE/issues/24 3 A pilot study confirms that retrofitting to infrequent word pairs is less effective.", "fits the training dictionary but drops test accuracy.", "Retrofitting to the combined dictionary splits the difference: higher test accuracy but lower train accuracy.", "These three modes offer a continuum between BLI test and training accuracy.", "We compare CLWE on two downstream tasks: document classification and dependency parsing.", "We fix the embeddng layer of the model to CLWE and use the zero-shot setting, where a model is trained in English and evaluated in the target language.", "Document Classification Our first downstream task is document-level classification.", "We use MLDoc, a multilingual classification benchmark (Schwenk and Li, 2018) using the standard split with 1K training and 4K test documents.", "Following Glavas et al. (2019), we use a convolutional neural network (Kim, 2014).", "We apply 0 .", "5 dropout to the final layer, run Adam (Kingma and Ba, 2015) with default parameters for ten epochs, and report the average accuracy of ten runs.", "Dependency Parsing We also test on dependency parsing, a structured prediction task.", "We use Universal Dependencies (Nivre et al., 2019, v2.4) with the standard split.", "We use the biaffine parser (Dozat and Manning, 2017) in Al-lenNLP (Gardner et al., 2017) with the same hyper-parameters as Ahmad et al. (2019).", "To focus on the influence of CLWE , we remove part-of-speech features (Ammar et al., 2016).", "We report the average unlabeled attachment score ( UAS ) of five runs.", "Results Although training dictionary retrofitting lowers BLI test accuracy, it improves both downstream tasks' test accuracy (Figure 3).", "This confirms that over-optimizing the test BLI accuracy can hurt downstream tasks because training dictionary words are also important.", "The synthetic dictionary further improves downstream models, showing that generalization to downstream tasks must balance between BLI training and test accuracy.", "Qualitative Analysis As a qualitative example, coordinations improve after retrofitting to the training dictionary.", "For example, in the German sentence Das Lokal ist sauber, hat einen gemtlichen Raucherraum' und wird gut besucht, the bar (Das Lokal) has three properties: it is clean, has a smoking room, and is popular.", "However, without retrofitting, the final property besucht is connected to hat instead of sauber; i.e., the final clause stands on its own.", "After retrofitting to the English-German training dictionary, besucht is moved closer to its English translation visited and is correctly parsed as a property of the bar.", "Previous work proposes variants of retrofitting broadly called semantic specialization methods.", "Our pilot experiments found similar trends when replacing retrofitting with Counter-fitting (Mrkic et al., 2016) and Attract-Repel (Mrkic et al., 2017), so we focus on retrofitting.", "Recent work applies semantic specialization to CLWE by using multilingual ontologies (Mrkic et al., 2017), transferring a monolingual ontology across languages (Ponti et al., 2019), and asking bilingual speakers to annotate task-specific keywords (Yuan et al., 2019).", "We instead re-use the training dictionary of the CLWE .", "Synthetic dictionaries are previously used to iteratively refine a linear projection (Artetxe et al., 2017; Conneau et al., 2018).", "These methods still underfit 
because of the linear constraint.", "We instead retrofit to the synthetic dictionary to fit the training dictionary better while keeping some generalization power of projection-based CLWE .", "Recent work investigates cross-lingual contextualized embeddings as an alternative to CLWE (Eisenschlos et al., 2019; Lample and Conneau, 2019; Huang et al., 2019; Wu and Dredze, 2019; Conneau et al., 2020).", "Our method may be applicable, as recent work also applies projections to contextualized embeddings (Aldarmaki and Diab, 2019; Schuster et al., 2019; Wang et al., 2020; Wu et al., 2020).", "Popular CLWE methods are optimized for BLI test accuracy.", "They underfit the training dictionary, which hurts downstream models.", "We use retrofitting to fully exploit the training dictionary.", "This post-processing step improves downstream task accuracy despite lowering BLI test accuracy.", "We then add a synthetic dictionary to balance BLI test and training accuracy, which further helps downstream models on average.", "BLI test accuracy does not always correlate with downstream task accuracy because words from the training dictionary are ignored.", "An obvious fix is adding training words to the BLI test set.", "However, it is unclear how to balance between training and test words.", "BLI accuracy assumes that all test words are equally important, but the importance of a word depends on the downstream task; e.g., the is irrelevant in document classification but important in dependency parsing.", "Therefore, future work should focus on downstream tasks instead of BLI .", "We focus on retrofitting due to its simplicity.", "There are other ways to fit the dictionary better; e.g., using a non-linear projection such as a neural network.", "We leave the exploration of non-linear projections to future work.", "This research is supported by NSF grant IIS-1564275 and by ODNI, IARPA, via the BETTER Program contract #2019-19051600005.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.", "The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." ]
[ "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "other", "method", "other", "other", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other" ]
[ "Understanding how linguistic structure is encoded in contextualized embedding could help explain their impressive performance across NLP.", "Existing approaches for probing them usually call for training classifiers and use the accuracy, mutual information, or complexity as a proxy for the representation's goodness.", "In this work, we argue that doing so can be unreliable because different representations may need different classifiers.", "We develop a heuristic, DIRECTPROBE , that directly studies the geometry of a representation by building upon the notion of a version space for a task.", "Experiments with several linguistic tasks and contextualized embeddings show that, even without training classifiers, DIRECTPROBE can shine light into how an embedding space represents labels, and also anticipate classifier performance for the representation.", "Distributed representations of words (e.g., Peters et al., 2018; Devlin et al., 2019) have propelled the state-of-the-art across NLP to new heights.", "Recently, there is much interest in probing these opaque representations to understand the information they bear (e.g., Kovaleva et al., 2019; Conneau et al., 2018; Jawahar et al., 2019).", "The most commonly used strategy calls for training classifiers on them to predict linguistic properties such as syntax, or cognitive skills like numeracy (e.g. Kassner and Schtze, 2020; Perone et al., 2018; Yaghoobzadeh et al., 2019; Krasnowska-Kieras and Wrblewska, 2019; Wallace et al., 2019; Pruksachatkun et al., 2020).", "Using these classifiers, criteria such as accuracy or model complexity are used to evaluate the representation quality for the task (e.g. Goodwin et al., 2020; Pimentel et al., 2020a; Michael et al., 2020).", "task.", "However, their ability to reveal the information in a representation is occluded by numerous factors, such as the choice of the optimizer and the initialization used to train the classifiers.", "For example, in our experiments using the task of preposition supersense prediction (Schneider et al., 2018), we found that the accuracies across different training runs of the same classifier can vary by as much as 8% !", "(Detailed results can be found in Appendix F.)", "Indeed, the very choice of a classifier influences our estimate of the quality of a representation.", "For example, one representation may achieve the best classification accuracy with a linear model, whereas another may demand a multi-layer per-ceptron for its non-linear decision boundaries.", "Of course, enumerating every possible classifier for a task is untenable.", "A common compromise involves using linear classifiers to probe representations (Alain and Bengio, 2017; Kulmizev et al., 2020), but doing so may mischaracterize representations that need non-linear separators.", "Some work recognizes this problem (Hewitt and Liang, 2019) and proposes to report probing results for at least logistic regression and a multi-layer percep-tron (Eger et al., 2019), or to compare the learning curves between multiple controls (Talmor et al., 2020).", "However, the success of these methods still depends on the choices of classifiers.", "In this paper, we pose the question: Can we evaluate the quality of a representation for an NLP task directly without relying on classifiers as a proxy?", "Our approach is driven by a characterization of not one, but all decision boundaries in a representation that are consistent with a training set for a task.", "This set of consistent (or approximately consistent) classifiers 
constitutes the version space for the task (Mitchell, 1982), and includes both simple (e.g., linear) and complex (e.g., non-linear) classifiers for the task.", "However, perfectly characterizing the version space for a problem presents computational challenges.", "To develop an approximation, we note that any decision boundary partitions the underlying feature space into contiguous regions associated with labels.", "We present a heuristic approach called DIRECTPROBE , which builds upon hierarchical clustering to identify such regions for a given task and embedding.", "The resulting partitions allow us to directly probe the embeddings via their geometric properties.", "For example, distances between these regions correlate with the difficulty of learning with the representation: larger distances between regions of different labels indicates that there are more consistent separators between them, and imply easier learning, and better generalization of classifiers.", "Further, by assigning test points to their closest partitions, we have a parameter-free classifier as a side effect, which can help benchmark representations without committing to a specific family of classifiers (e.g., linear) as probes.", "Our experiments study five different NLP tasks that involve syntactic and semantic phenomena.", "We show that our approach allows us to ascertain, without training a classifier,", "(a) if a representation admits a linear separator for a dataset,", "(b) how different layers of BERT differ in their representations for a task,", "(c) which labels for a task are more confusable,", "(d) the expected performance of the best classifier for the task, and", "(e) the impact of fine-tuning.", "In summary, the contributions of this work are: 1. We point out that training classifiers as probes is not reliable, and instead, we should directly analyze the structure of a representation space.", "2. We formalize the problem of evaluating representations via the notion of version spaces and introduce DIRECTPROBE , a heuristic method to approximate it directly which does not involve training classifiers.", "3. Via experiments, we show that our approach can help identify how good a given representation will be for a prediction task.", "1 2 Representations and Learning In this section, we will first briefly review the relationship between representations and model learning.", "Then, we will introduce the notion of (cid:15) -version spaces to characterize representation quality.", "The problem of training a predictor for a task can be divided into two sub-problems:", "(a) representing data, and", "(b) learning a predictor (Bengio et al., 2013).", "The former involves transforming input objects x words, pairs of words, sentences,", "etc. 
into a representation E ( x ) R n that provides features for the latter.", "Model learning builds a classifier h over E ( x ) to make a prediction, denoted by the function composition h ( E ( x )) .", "Figure 1 illustrates the two sub-tasks and their roles in figuratively transporting an input x towards a prediction with a probability of error below a small (cid:15) .", "For the representation E 1 , the best classifier h 1 falls short of this error requirement.", "The representation E 4 does not need an additional classifier because it is identical to the label.", "The representations E 2 and E 3 both admit classifiers h 2 and h 3 that meet the error threshold.", "Further, note, that E 3 leaves less work for the classifier than E 2 , suggesting that it is a better representation as far as this task is concerned.", "The quality of a representation E for a task is a function of both the performance and the complexity of the best classifier h over that representation.", "Two observations follow from the above discussion.", "First, we cannot enumerate every possible classifier to find the best one.", "Other recent work, such 2 In addition to performance and complexity of the best classifier, other aspects such as sample complexity, the stability of learning, etc are also important.", "We do not consider them in this work: these aspects are related to optimization and learnability, and more closely tied to classifier-based probes.", "as that of Xia et al. (2020) make a similar point.", "Instead, we need to resort to an approximation to evaluate representations.", "Second, trivially, the best representation for a task is identical to an accurate classifier; in the illustration in Figure 1, this is represented by E 4 .", "However, such a representation is over-specialized to one task.", "In contrast, learned representations like BERT promise task-independent representations that support accurate classifiers.", "Given a classification task, we seek to disentangle the evaluation of a representation E from the classifiers h that are trained over it.", "To do so, the first step is to characterize all classifiers supported by a representation.", "Classifiers are trained to find a hypothesis (i.e., a classifier) that is consistent with a training set.", "A representation E admits a set of such hypotheses, and a learner chooses one of them.", "Consider the top-left example of Figure 2. 
There are many classifiers that separate the two classes; the figure shows two linear (h1 and h2) and one non-linear (h3) example.", "Given a set H of classifiers of interest, the subset of classifiers that are consistent with a given dataset represents the version space with respect to H (Mitchell, 1982).", "To account for errors or noise in data, we define an ε-version space: the set of hypotheses that can achieve less than ε error on a given dataset.", "Let us formalize this definition.", "Suppose H represents the whole hypothesis space consisting of all possible classifiers h of interest.", "The ε-version space Vε(H, E, D) expressed by a representation E for a labeled dataset D is defined as: Vε(H, E, D) ≜ { h ∈ H | err(h, E, D) ≤ ε } (1), where err represents training error.", "Note that the ε-version space Vε(H, E, D) is only a set of functions and does not involve any learning.", "However, understanding a representation requires examining its ε-version space: a larger one would allow for easier learning.", "In previous work, the quality of a representation E for a task represented by a dataset D is measured via properties of a specific h ∈ Vε(H, E, D), typically a linear probe.", "Commonly measured properties include generalization error (Kim et al., 2019), minimum description length of labels (Voita and Titov, 2020) and complexity (Whitney et al., 2020).", "Instead, we seek to directly evaluate the ε-version space of a representation for a task, without committing to a restricted set of probe classifiers.", "Although the notion of an ε-version space is well defined, finding it for a given representation and task is impossible because it involves enumerating all possible classifiers.", "In this section, we will describe a heuristic to approximate the ε-version space.", "We call this approach DIRECTPROBE.", "Each classifier in Vε(H, E, D) is a decision boundary in the representation space that separates examples with different labels (see Figure 2, left).", "The decisions of a classifier can be mimicked by a set of piecewise linear functions.", "Figure 2 (middle) shows two examples.", "At the top is the simple case with linearly separable points.", "At the bottom is a more complicated case with a circular separator.", "The set of piecewise linear functions that matches its decisions needs at least three lines.", "The ideal piecewise linear separator partitions training points into groups, each of which contains points with exactly one label.", "These groups can be seen as defining convex regions in the embedding space (see Figure 2, left).", "Any classifier in Vε(H, E, D) must cross the regions between the groups with different labels; these are the regions that separate labels from each other, as shown in the gray areas in Figure 2 (middle).", "Inspired by this, we posit that these regions between groups with different labels, and indeed the partitions themselves, offer insight into Vε(H, E, D).", "Although finding the set of all decision boundaries remains hard, finding the regions between convex groups that these piecewise linear functions split the data into is less so.", "Grouping data points in this fashion is related to a well-studied problem, namely clustering, and several recent works have looked at clustering of contextualized representations (Reimers et al., 2019; Aharoni and Goldberg, 2020; Gupta et al., 2020).", "In this work, we have a new clustering problem with the following criteria:", "(i) All points in a group have the same label.", "We need to ensure we are mimicking the decision boundaries.", "(ii) There are no overlaps between the convex hulls of each group.", "If the convex hulls of two groups do not overlap, there must exist a line that can separate them, as guaranteed by the hyperplane separation theorem.", "(iii) Minimize the number of total groups.", "Otherwise, a simple solution is that each data point becomes a group by itself.", "Note that the criteria do not require all points of one label to be grouped together.", "For example, in Figure 2 (bottom right), points of the circle class are in three subgroups.", "To summarize what we have so far: we transformed the problem of finding the ε-version space into a clustering problem with specific criteria.", "Next, let us see a heuristic for partitioning the training set into clusters based on these criteria.", "To find clusters as described in Section 3.2, we define a simple bottom-up heuristic clustering strategy that forms the basis of DIRECTPROBE (Algorithm 1).", "In the beginning, each example xi with label yi in a training set D is a cluster Ci by itself.", "In each iteration, we select the closest pair of clusters with the same label and merge them (lines 4, 5).", "If the convex hull of the new cluster does not overlap with any of the other clusters, we keep this new cluster (line 9).", "Otherwise, we flag this pair (line 7). (Note that all other clusters here means the other clusters with different labels.)", "There is no need to prohibit overlap between clusters with the same label since they might be merged in the next iterations.", "We define the distance between clusters Ci and Cj as the Euclidean distance between their centroids.", "Although Algorithm 1 does not guarantee the minimization criterion in Section 3.2 since it is a greedy heuristic, we will see in our experiments that, in practice, it works well.", "Overlaps Between Convex Hulls: A key point of Algorithm 1 is checking if the convex hulls of two sets of points overlap (line 6).", "Suppose we have two sets of points C = {x^C_1, ..., x^C_n} and C' = {x^C'_1, ..., x^C'_m}.", "We can restate this as the problem of checking if there is some vector w ∈ ℝ^n and a number b ∈ ℝ such that: for every x^C_i ∈ C, w⊤E(x^C_i) + b ≥ 1, and for every x^C'_j ∈ C', w⊤E(x^C'_j) + b ≤ −1.", "where E is the representation under investigation.", "We can state this problem as a linear program that checks for feasibility of the system of inequalities.", "If the LP problem is feasible, there must exist a separator between the two sets of points, and they do not overlap.", "In our implementation, we use the Gurobi optimizer (Gurobi Optimization, 2020) to solve the linear programs.", "3.4 Noise: Controlling ε. Clusters with only a small number of points could be treated as noise.", "Geometrically, a point in the neighborhood of other points with different labels [Footnote: Algorithm 1 can be made faster by avoiding unnecessary calls to the solver.", "Appendix A gives a detailed description of these techniques, which are also incorporated in the code release.]", "could be thought of as noise.", "Other clusters cannot merge with it because of the no-overlap constraint.", "As a result, such clusters will have only a few points (one or two in practice).", "If we want a zero error rate on the training data, we can keep these noise points; if we allow a small error rate ε, then we can remove these noise clusters.", "In our experiments, for simplicity, we keep all clusters.", "Before looking at the analysis offered by the partitions obtained via DIRECTPROBE in Section 5, let us first enumerate the English NLP tasks and representations we will encounter.", "Our main experiments focus on BERT base,cased, and we also show additional analysis on other contextual representations: ELMo (Peters et al., 2018), BERT large,cased (Devlin et al., 2019), RoBERTa base and RoBERTa large (Liu et al., 2019b).", "We refer the reader to Appendix C for further details about these embeddings.", "We use the average of subword embeddings as the token vector for the representations that use subwords.", "We use the original implementation of ELMo, and the HuggingFace library (Wolf et al., 2020) for the others.", "We conduct our experiments on five NLP tasks that cover the varied usages of word representations (token-based, span-based, and token pairs) and include both syntactic and semantic prediction problems.", "Appendix D has more details about the tasks to help with replication.", "Preposition supersense disambiguation represents a pair of tasks, involving classifying a preposition's semantic role (SS-role) and semantic function (SS-func).", "Following previous work (Liu et al., 2019a), we only use the single-token prepositions in the Streusle v4.2 corpus (Schneider et al., 2018).", "Part-of-speech tagging (POS) is a token-level prediction task.", "We use the English portion of the parallel universal dependencies treebank (UD-PUD; Nivre et al., 2016).", "For the semantic relation task, we use the dataset of SemEval-2010 Task 8 (Hendrickx et al., 2010).", "To represent the pair of nominals, we concatenate their embeddings.", "Some nominals could be spans instead of individual tokens, and we represent them via the average embedding of the tokens in the span.", "Dependency relation (DEP) is the task of predicting the syntactic dependency relation between a token w_head and its modifier w_mod.", "We use the universal dependency annotation of the English Web Treebank (Bies et al., 2012).", "As with semantic relations, to represent the pair of tokens, we concatenate their embeddings.", 
"The key starting point of this work is that restricting ourselves to linear probes may be insufficient.", "To validate the results of our analysis, we evaluate a large collection of classifiersfrom simple linear classifiers to two-layers neural networksfor each task.", "For each one, we choose the best hyper-parameters using cross-validation.", "From these classifiers, we find the best test accuracy of each task and representation.", "All classifiers are trained with the scikit-learn library (Pedregosa et al., 2011).", "To reduce the impact of randomness, we trained each classifier 10 times with different initializations, and report their average accuracy.", "The Appendix E summarizes the best classifiers we found and their performance.", "DIRECTPROBE helps partition an embedding space for a task, and thus characterize its (cid:15) -version space.", "Here, we will see that these clusters do indeed characterize various linguistic properties of the representations we consider.", "linear separability of representations for a task.", "The best scenario is when the number of clusters equals the number of labels.", "In this case, examples with the same label are placed close enough by the representation to form a cluster that is separable from other clusters.", "A simple linear multi-class classifier can fit well in this scenario.", "In contrast, if the number of clusters is more than the number of labels, then some labels are distributed across multiple clusters (as in Figure 2, bottom).", "There must be a non-linear decision boundary.", "Consequently, Embedding Linear SVM #Clusters Training Accuracy BERT base,cased 100 17 BERT large,cased 100 17 RoBERTa base 100 17 RoBERTa large 99.97 1487 ELMo 100 17 Table 1: Linearity experiments on POS tagging task.", "In other words, using the clusters, and without training a classifier, we can answer the question: can a linear classifier fit the training set for a task with a given representation?", "To validate our predictions, we use the training accuracy of a linear SVM (Chang and Lin, 2011) classifier.", "If a linear SVM can perfectly fit ( 100% accuracy) a training set, then there exist linear decision boundaries that separate the labels.", "Table 1 shows the linearity experiments on the POS task, which has 17 labels in total.", "All representations except RoBERTa large have 17 clusters, suggesting a linearly separable space, which is confirmed by the SVM accuracy.", "We conjecture that this may be the reason why linear models usually work for BERT-family models.", "Of course, linear separability does not mean the task is easy or that the best classifier is a linear one.", "We found that, while most representations we considered are linearly separable for most of our tasks, the best classifier is not always linear.", "We refer the reader to Appendix E for the full results.", "As we mentioned in 3.1, a learning process seeks to find a decision boundary that separates clusters with different labels.", "Intuitively, a larger gap between them would make it easier for a learner to find a suitable hypothesis h that generalizes better.", "We use the distance between convex hulls of clusters as an indicator of the size of these gaps.", "We note that the problem of computing the distance between convex hulls of clusters is equivalent to finding the maximum margin separator between them.", "To find the distance between two clusters, we train a linear SVM (Chang and Lin, 2011) that Figure 3: Here we juxtapose the minimum distances between clusters and the best 
classifier accuracy for all 12 layers.", "separates them and compute its margin.", "The distance we seek is twice the margin.", "For a given representation, we are interested in the minimum distance across all pairs of clusters with different labels.", "Higher layers usually have larger (cid:15) -version spaces.", "Different layers of BERT play different roles when encoding liguistic information (Tenney et al., 2019).", "To investigate the geometry of different layers of BERT, we apply DIRECTPROBE to each layer of BERT base,cased for all five tasks.", "Then, we computed the minimum distances among all pairs of clusters with different labels.", "By comparing the minimum distances of different layers, we answer the question: how do different layers of BERT differ in their representations for a task?", "Figure 3 shows the results on all tasks.", "In each subplot, the horizontal axis is the layer index.", "For each layer, the blue circles (left vertical axis) is the best classifier accuracy, and the red triangles (right vertical axis) is the minimum distance described above.", "We observe that both best classifier accuracy and minimum distance show similar trends across different layers: first increasing, then decreasing.", "It shows that minimum distance correlates with the best performance for an embedding space, though it is not a simple linear relation.", "Another interesting observation is the decreasing performance and minimum distance of higher layers, which is also corroborated by Ethayarajh (2019) and Liu et al. (2019a).", "Fine-tuning expands the (cid:15) -version space.", "Past work (Peters et al., 2019; Arase and Tsujii, 2019; Merchant et al., 2020) has shown that fine-tuning pre-trained models on a specific task improves performance, and fine-tuning is now the de facto procedure for using contextualized embeddings.", "In this experiment, we try to understand why fine-tuning can improve performance.", "Without training classifiers, we answer the question: What changes in the embedding space after fine-tuning?", "We conduct the experiments described in 5.2.1 on the last layer of BERT base,cased before and after fine-tuning for all tasks.", "Table 2 shows the results.", "We see that after fine-tuning, both the best classifier accuracy and minimum distance show a big boost.", "It means that fine-tuning pushes the clusters away from each other in the representation space, which results in a larger (cid:15) -version space.", "As we discussed in 5.2, a larger (cid:15) -version space admits more good classifiers and allows for better generalization.", "Small distances between clusters can confuse a classifier.", "By comparing the distances between clusters, we can answer the question: Which labels for a task are more confusable?", "We compute the distances between all the pairs of labels based on the last layer of BERT base,cased .", "6 Based on an even split of the distances, we partition all label pairs into three bins: small, medium, and large.", "For each task, we use the predictions of the best classifier to compute the number of mis-6 For all tasks, BERT base,cased space (last layer) is linearly separable.", "So, the number of label pairs equals the number of cluster pairs.", "classified label pairs for each bin.", "For example, if the clusters associated with the part of speech tags ADV and ADJ are close to each other, and the best classifier misclassified ADV as ADJ, we put this error pair into the bin of small distance.", "The distribution of all errors is shown in Table 3. 
This table shows that a large majority of the misclassified labels are concentrated in the small distance bin.", "For example, in the supersense role task (SS-role), 97 .", "17% of the errors happened in small distance bin.", "The number of label pairs of each bin is shown in the parentheses.", "Table 3 shows that small distances between clusters indeed confuse a classifier and we can detect it without training classifiers.", "We can predict the expected performance of the best classifier.", "Any h V (cid:15) ( H , E, D ) is a predictor for the task D on the representation E .", "As a by-product of the clusters from DIRECTPROBE , we can define a predictor.", "The prediction strategy is simple: for a test example, we assign it to its closest cluster.", "7 Indeed, if the label of the cluster is the true label of the test point, then we know that there exists some classifier that allows this example to be correctly labeled.", "We can verify the label every test point and compute the aggregate accuracy to serve as an indicator of the generalization ability of the representation at hand.", "We call this accuracy the intra-accuracy .", "In other words, without training classifiers, we can answer the question: given a representation, what is the expected performance of the best classifier for a task?", "Figure 4 compares the best classifier accuracy, and the intra-accuracy of the last layer of different embeddings.", "Because our assignment strategy is similar to nearest neighbor classification (1-kNN), which assigns the unlabelled test point to its closest labeled point, the figure also compares to the 1-kNN accuracy.", "First, we observe that intra-accuracy always outperforms the simple 1-kNN classifier, showing that DIRECTPROBE can use more information from the representation space.", "Second, we see that the intra-accuracy is close to the best accuracy for some tasks (Supersense tasks and POS tagging).", "Moreover, all the pearson correlation coefficients between best accuracy and intra-accuracy (showed in the parentheses alongside each task title) suggest a high linear correlation between best classifier accuracy and intra-accuracy.", "That is, the intra-accuracy can be a good predictor of the best classifier accuracy for a representation.", "From this, we argue that intra-accuracy can be interpreted as a benchmark accuracy of a given representation without actually training classifiers.", "The distances between a test point and all the clusters from the training set can not only be used to predict the label but also can be used to identify difficult examples as per a given representation.", "Doing so could lead to re-annotation of the data, and perhaps lead to cleaner data, or to improved embeddings.", "Using the supersense role task, we show a randomly chosen example of a mismatch 7 To find the distance between the convex hull of a cluster and a test point, we find a max-margin separating hyperplane by training a linear SVM that separates the point from the cluster.", "The distance is twice the distance between the hyperplane and the test point.", "The data labels the word our as GESTALT , while the embedding places it in the neighborhood of POSSESSOR .", "The annotation guidelines for these labels (Schneider et al., 2017) notes that GESTALT is a supercategory of POSSESSOR .", "The latter is specifically used to identify cases where the possessed item is alienable and has monetary value.", "From this definition, we see that though the annotated label is GESTALT , it could arguably also be 
a POSSESSOR if phone numbers are construed as alienable possessions that have monetary value.", "Importantly, it is unclear whether BERT base,cased makes this distinction.", "Other examples we examined required similarly nuanced analysis.", "This example shows DIRECTPROBE can be used to identify examples in datasets that are potentially mislabeled, or at least, require further discussion.", "In addition to the classifier based probes described in the rest of the paper, a complementary line of work focuses on probing the representations using a behavior-based methodology.", "Controlled test sets (Senel et al., 2018; Jastrzebski et al., 2017) are designed and errors are analyzed to reverse-engineer what information can be encoded by the model (e.g., Marvin and Linzen, 2018; Ravichander et al., 2021; Wu et al., 2020).", "Another line of work probes the space by opening up the representation space or the model (e.g., Michel et al., 2019; Voita et al., 2019).", "There are some efforts to inspect the space from a geometric perspective (e.g., Ethayarajh, 2019; Mimno and Thompson, 2017).", "Our work extends this line of work to connect the geometric structure of embedding space with classifier performance without actually training a classifier.", "Recent work (Pimentel et al., 2020b; Voita and Titov, 2020; Zhu and Rudzicz, 2020) probe representations from an information theoretic perspective.", "These efforts still need a probability distribution p ( y | x ) from a trained classifier.", "In 5.3, we use clusters to predict labels.", "In the same vein, the conditional probability p ( y | x ) can be obtained by treating the negative distances between the test point x and all clusters as predicted scores and normalizing via softmax.", "Our formalization can fit into the information theoretic analysis and yet avoid training a classifier.", "Our analysis and experiments open new directions for further research: Novel pre-training target: The analysis presented here informs us that larger distance between clusters can improve classifiers.", "This could guide loss function design when pre-training representations.", "Quality of a representation: In this paper, we focus on the accuracy of a representation.", "We could seek to measure other properties (e.g., complexity) or proxies for them.", "These analytical approaches can be applied to the (cid:15) -version space to further analyze the quality of the representation space.", "Theory of representation: Learning theory, e.g. VC-theory (Vapnik, 2013), describes the learnabil-ity of classifiers; representation learning lacks of such theoretical analysis.", "The ideas explored in this work ( (cid:15) -version spaces, distances between clusters being critical) could serve as a foundation for an analogous theory of representations.", "In this work, we ask the question: what makes a representation good for a task?", "We answer it by developing DIRECTPROBE , a heuristic approach builds upon hierarchical clustering to approximate the (cid:15) -version space.", "Via experiments with several contextualized embeddings and linguistic tasks, we showed that DIRECTPROBE can help us understand the geometry of the embedding space and ascertain when a representation can successfully be employed for a task.", "We thank the members of the Utah NLP group and Nathan Schneider for discussions and valuable insights, and reviewers for their helpful feedback.", "We also thank the support of NSF grants #1801446 (SATC) and #1822877 (Cyberlearning)." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "objective", "other", "other", "method", "other", "method", "objective", "other", "method", "method", "other", "other", "objective", "abstain", "objective", "result", "other", "other" ]
[ "We analyze if large language models are able to predict patterns of human reading behavior.", "We compare the performance of language-specific and multilingual pretrained transformer models to predict reading time measures reflecting natural human sentence processing on Dutch, English, German, and Russian texts.", "This results in accurate models of human reading behavior, which indicates that transformer models implicitly encode relative importance in language in a way that is comparable to human processing mechanisms.", "We find that BERT and XLM models successfully predict a range of eye tracking features.", "In a series of experiments, we analyze the cross-domain and cross-language abilities of these models and show how they reflect human sentence processing.", "When processing language, humans selectively attend longer to the most relevant elements of a sentence (Rayner, 1998).", "This ability to seamlessly evaluate relative importance is a key factor in human language understanding.", "It remains an open question how relative importance is encoded in computational language models.", "Recent analyses conclude that the cognitively motivated at-tention mechanism in neural models is not a good indicator for relative importance (Jain and Wallace, 2019).", "Alternative methods based on salience (Bastings and Filippova, 2020), vector normalization (Kobayashi et al., 2020), or subset erasure (De Cao et al., 2020) are being developed to increase the post-hoc interpretability of model predictions but the cognitive plausibility of the underlying representations remains unclear.", "In human language processing, phenomena of relative importance can be approximated indirectly by tracking eye movements and measuring fixation Figure 1: From the fixation times in milliseconds of a single subject in the ZuCo 1.0 dataset, the feature vector described in Section 3.2 for the wors Mary would be [2 , 233 , 233 , 431 , 215 . 
5 , 1 , 1 , 1] .", "duration (Rayner, 1977).", "It has been shown that fixation duration and relative importance of text segments are strongly correlated in natural reading, so that direct links can be established on the token level (Malmaud et al., 2020).", "In the example in Figure 1, the newly introduced entity Mary French is fixated twice and for a longer duration because it is relatively more important for the reader than the entity Laurence , which had been introduced in the previous sentence.", "Being able to reliably predict eye movement patterns from the language input would bring us one step closer to understand the cognitive plausibility of these models.", "Contextualized neural language models are less interpretable than conceptually motivated psycholinguistic models but they achieve high performance in many language understanding tasks and can be fitted successfully to cognitive features such as self-paced reading times and N400 strength (Merkx and Frank, 2020).", "Moreover, approaches to directly predict cognitive signals (e.g., brain activity) indicate that neural representations implicitly encode similar information as humans (Wehbe et al., 2014; Abnar et al., 2019; Sood et al., 2020; Schrimpf et al., 2020).", "However, it has not been analyzed to which extent transformer language models are able to directly predict human behavioral metrics such as gaze patterns.", "be improved even further if their inductive bias is adjusted using human cognitive signals such as eye tracking, fMRI, or EEG data (Hollenstein et al., 2019; Toneva and Wehbe, 2019; Takmaz et al., 2020).", "While psycholinguistic work mainly focuses on very specific phenomena of human language processing that are typically tested in experimental settings with constructed stimuli (Hale, 2017), we focus on directly generating token-level predictions from natural reading.", "We fine-tune transformer models on human eye movement data and analyze their ability to predict human reading behavior focusing on a range of reading features, datasets, and languages.", "We compare the performance of monolingual and multilingual transformer models.", "Multilingual models represent multiple languages in a joint space and aim at a more universal language understanding.", "As eye tracking patterns are consistent across languages for certain phenomena, we hypothesize that multilingual models might provide cognitively more plausible representations and outperform language-specific models in predicting reading measures.", "We test this hypothesis on 6 datasets of 4 Indo-European languages, namely English, German, Dutch and Russian.", "1 We find that pretrained transformer models are surprisingly accurate at predicting reading time measures in four Indo-European languages.", "Multilingual models show an advantage over language-specific models, especially when fine-tuned on smaller amounts of data.", "Compared to previous psycholinguistic reading models, the accuracy achieved by the transformer models is remarkable.", "Our results indicate that transformer models implicitly encode relative importance in language in a way that is comparable to human processing mechanisms.", "As a consequence, it should be possible to adjust the inductive bias of neural models towards more cognitively plausible outputs without having to resort to large-scale cognitive datasets.", "Using eye movement data to modify the inductive bias of language processing models has resulted in improvements for several NLP tasks (e.g., Barrett et al. 
2016; Hollenstein and Zhang 2019).", "It has also been used as a supervisory signal in multi-task learning scenarios (Klerke et al., 2016; Gonzalez-1 Code available on GitHub: https://github.com/ DS3Lab/multilingual-gaze Garduno and Sgaard, 2017) and as a method to fine-tune the attention mechanism (Barrett et al., 2018).", "We use eye tracking data to evaluate how well transformer language models predict human sentence processing.", "Therefore, in this section, we discuss previous work on probing transformers models as well as on modelling human sentence processing.", "Contextualized neural language models have become increasingly popular, but our understanding of these black box algorithms is still rather limited (Gilpin et al., 2018).", "Current intrinsic evaluation methods do not capture the cognitive plausibility of language models (Manning et al., 2020; Gladkova and Drozd, 2016).", "In previous work of interpreting and probing language models, human behavioral data as well as neuroimaging recordings have been leveraged to understand the inner workings of the neural models.", "For instance, Ettinger (2020) explores the linguistic capacities of BERT with a set of psycholinguistic diagnostics.", "Toneva and We-hbe (2019) propose an interpretation approach by learning alignments between the models and brain activity recordings (MEG and fMRI).", "Hao et al. (2020) propose to evaluate language model quality based on the degree to which they exhibit humanlike behavior such as predictability measures collected from human subjects.", "However, their metric does not reveal any details about the commonalities between the model and human sentence processing.", "The benefits of multilingual models are controversial.", "Transformer models trained exclusively on a specific language often outperform multilingual models trained on various languages simultaneously, even after fine-tuning.", "This curse of multilinguality (Conneau et al., 2020; Vulic et al., 2020) has been shown for Spanish (Canete et al., 2020), Finnish (Virtanen et al., 2019) and Dutch (Vries et al., 2019).", "In this paper we investigate whether a similar effect can be observed when leveraging these models to predict human behavioral measures, or whether in that case the multilingual models provide more plausible representations of human reading due to the common eye tracking effects across languages.", "Previous work of neural modelling of human sentence processing has focused on recurrent neural networks, since their architecture and learn-Language", "ing mechanism appears to be cognitively plausible (Keller, 2010; Michaelov and Bergen, 2020).", "However, recent work suggests that transformers perform better at modelling certain aspects of the human language understanding process (Hawkins et al., 2020).", "While Merkx and Frank (2020) and Wilcox et al. (2020) show that the psychometric predictive power of transformers outperforms RNNs on eye tracking, self-paced reading times and N400 strength, they do not directly predict cognitive features.", "Schrimpf et al. (2020) show that contextualized monolingual English models accurately predict language processing in the brain.", "Context effects are known to influence fixations times during reading (Morris, 1994).", "The notion of using contextual information to process language during reading has been well-established in psycholinguistics (e.g., Inhoff and Rayner 1986 and Jian et al. 
2013).", "However, to the best of our knowledge, we are the first to study to which extent the representations learned by transformer language models entail these human reading patterns.", "Compared to neural models of human sentence processing, we predict not only individual metrics but a range of eye tracking features covering the full reading process from early lexical access to late syntactic processing.", "By contrast, most models of reading focus on predicting skipping probability (Reichle et al., 1998; Matthies and Sgaard, 2013; Hahn and Keller, 2016).", "Sood et al. (2020) propose a text saliency model which predicts fixation durations that are then used to compute the attention scores in a transformer network.", "We predict eye tracking data only from naturalistic reading studies in which the participants read full", "sentences or longer spans of naturally occurring text in their own speed.", "The data from these studies exhibit higher ecological validity than studies which rely on artificially constructed sentences and paced presentation (Alday, 2019).", "To conduct a cross-lingual comparison, we use eye tracking data collected from native speakers of four languages (see Table 1 for details).", "English The largest number of eye tracking data sources are available for English.", "We use eye tracking features from three English corpora: (1) The Dundee corpus (Kennedy et al., 2003) contains 20 newspaper articles from The Independent , which were presented to English native readers on a screen five lines at a time.", "(2) The GECO corpus (Cop et al., 2017) contains eye tracking data from English monolinguals reading the entire novel The Mysterious Affair at Styles by Agatha Christie.", "The text was presented on the screen in paragraphs.", "(3) The ZuCo corpus (Hollenstein et al., 2018, 2020) includes eye tracking data of full sentences from movie reviews and Wikipedia articles.", "3 Dutch The GECO corpus (Cop et al., 2017) additionally contains eye tracking data from Dutch readers, which were presented with the same novel in their native language.", "German The Potsdam Textbook Corpus (PoTeC, Jger et al. 2021) contains 12 short passages of 158 words on average from college-level biology and physics textbooks, which are read by expert and laymen German native speakers.", "The full passages were presented on multiple lines on the screen.", "Russian The Russian Sentence Corpus (RSC, Laurinavichyute et al. 
2019) contains 144 naturally occurring sentences extracted from the Russian National Corpus.", "4 Full sentences were presented on the screen to monolingual Russian-speaking adults one at a time.", "A fixation is defined as the period of time where the gaze of a reader is maintained on a single location.", "Fixations are mapped to words by delimiting the boundaries around the region on the screen belonging to each word w .", "A word can be fixated more than once.", "For each token w in the input text, we predict the following eight eye tracking features that encode the full reading process from early lexical access up to subsequent syntactic integration.", "Word-level characteristics We extract basic features that encode word-level characteristics: (1) number of fixations ( NFIX ), the number of times a subject fixates w , averaged over all subjects; (2) mean fixation duration (MFD), the average fixation duration of all fixations made on w , averaged over all subjects; (3) fixation proportion ( FPROP ), the number of subjects that fixated w , divided by the total number of subjects.", "Early processing We also include features to capture the early lexical and syntactic processing, based on the first time a word is fixated: (4) first fixation duration (FFD), the duration, in milliseconds, of the first fixation on w , averaged over all subjects; (5) first pass duration (FPD), the sum of all fixations on w from the first time a subject fixates w to the first time the subject fixates another token, averaged over all subjects.", "disambiguation, based on words which were fixated more than once: (6) total reading time (TRT), the sum of the duration of all fixations made on w , averaged over all subjects; (7) number of re-fixations ( NREFIX ), the number of times w is fixated after the first fixation, i.e., the maximum between 0 and the NFIX -1, averaged over all subjects; (8) re-read proportion (REPROP ), the number of subjects that fixated w more than once, divided by the total number of subjects.", "The values of these eye tracking features vary over different ranges (see Appendix A).", "FFD, for example, is measured in milliseconds, and average values are around 200 ms, whereas REPROP is a proportional measure, and therefore assumes floating-point values between 0 and", "1. 
We standardize all eye tracking features independently (range: 0100), so that the loss can be calculated uniformly over all feature dimensions.", "Eye movements depend on the stimulus and are therefore language-specific but there exist universal tendencies which remain stable across languages (Liversedge et al., 2016).", "For example, the average fixation duration in reading ranges from 220 to 250 ms independent of the language.", "Furthermore, word characteristics such as word length, frequency and predictability affect fixation duration similarly across languages but the effect size depends on the language and the script (Laurinavichyute et al., 2019; Bai et al., 2008).", "The word length effect, i.e., the fact that longer words are more likely to be fixated, can be observed across all four languages included in this work (see Appendix A).", "We compare the ability to predict eye tracking features in two models: BERT and XLM.", "Both models are trained on the transformer architecture (Vaswani et al., 2017) and yield state-of-the-He is of three quarters Irish andone quarterFrenchdescent.", "art results for a wide range of NLP tasks (Liang et al., 2020).", "The multilingual BERT model simply concatenates the Wikipedia input from 104 languages and is optimized by performing masked token and next sentence prediction as in the monolingual model (Devlin et al., 2019) without any cross-lingual constraints.", "In contrast, XLM adds a translation language modeling objective, by explicitly using parallel sentences in multiple languages as input to facilitate cross-lingual transfer (Lam-ple and Conneau, 2019).", "Both BERT and XLM use subword tokenization methods to build shared vocabulary spaces across languages.", "We use the pretrained checkpoints from the Hug-gingFace repository for monolingual and multilingual models (details in Table 2).", "5 5 Method We fine-tune the models described above on the features extracted from the eye tracking datasets.", "The eye tracking prediction uses a model for token regression, i.e., the pretrained language models with a linear dense layer on top of it.", "The final dense layer is the same for all tokens, and performs a projection from the dimension of the hidden size of the model (e.g., 768 for BERT-EN or 1,280 for XLM -100) to the dimension of the eye tracking feature space (8, in our case).", "The model is trained for the regression task using the mean squared error (MSE) loss.", "Training Details We split the data into 90% training data, 5% validation and 5% test data.", "We initially tuned the hyper-parameters manually and set the following values for all models: We use an AdamW optimizer (Loshchilov and Hutter, 2018) with a learning rate of 0 .", "00005 and a weight decay of 0 .", "01 .", "The batch size varies depending on the 5 https://huggingface.co/transformers/ pretrained_models.html model dimensions (see Appendix C.2).", "We employ a linear learning rate decay schedule over the total number of training steps.", "We clip all gradients exceeding the maximal value of", "1. 
We train the models for 100 epochs, with early stopping after 7 epochs without an improvement on the validation accuracy.", "Evaluation Procedure As the features have been standardized to the range 0100, the mean absolute error (MAE) can be interpreted as a percentage error.", "For readability, we report the prediction accuracy as 100 MAE in all experiments.", "The results are averaged over batches and over 5 runs with varying random seeds.", "For a single batch of sentences, the overall MAE is calculated by concatenating the words in each sentence and the feature dimensions for each word, and padding to the maximum sentence length.", "The per-feature MAE is calculated by concatenating the words in each sentence.", "For example, for a batch of B sentences, each composed of L words, and G eye tracking features per word, the overall MAE is calculated over a vector of B*L*G dimensions.", "In contrast, the MAE for each individual feature is calculated over a vector of B*L dimensions.", "Tables 3 and 4 show that all models predict the eye tracking features with more than 90% accuracy for English and Dutch.", "For English, the BERT models yield high performance on all three datasets with standard deviations below 0.15.", "The results for the XLM models are slightly better on average but exhibit much higher standard deviations.", "Similar to the results presented by Lample and Conneau (2019), we find that more training data from multiple languages improves prediction performance.", "For instance, the XLM -100 model achieves higher accuracy than the XLM -17 model in all cases.", "For Model Dundee (en) GECO (en) ZuCo (en) ALL (en) BERT-EN 92.63 (0.05) 93.68 (0.14) 93.42 (0.02) 93.71 (0.06) BERT-MULTI 92.73 (0.06) 93.73 (0.12) 93.74 (0.05) 93.74 (0.07) XLM-EN 90.41 (2.16) 91.15 (1.42) 92.03 (2.11) 90.88 (1.50) XLM-ENDE 92.79 (0.15) 93.89 (0.12) 93.76 (0.15) 93.96 (0.08) XLM -17 92.11 (1.68) 91.79 (1.75) 92.05 (2.25) 93.80 (0.38) XLM -100 92.99 (0.05) 93.04 (1.40) 93.97 (0.09) 93.96 (0.06) Table 3: Prediction accuracy over all eye tracking features for the English corpora, including the concatenated dataset.", "the smaller non-English datasets, PoTeC (de) and RSC (ru), the multilingual XLM models clearly outperform the monolingual models.", "For the English datasets, the differences are minor.", "Size Effects More training data results in higher prediction accuracy even when the eye tracking data comes from various languages and was recorded in different reading studies by different devices (ALL-LANGS, fine-tuning on the data of all four languages together).", "However, merely adding more data from the same language (ALL (en), fine-tuning on the English data from Dundee, GECO and ZuCo together) does not result in higher performance.", "To analyze this further, we perform an ablation study on varying amounts of training data.", "The results are shown in Figure 3 for Dutch and English.", "The performance of the XLM models remains stable even with a very small percentage of eye tracking data.", "The performance of the BERT models, however, drops drastically when fine-tuning on less than 20% of the data.", "Similar to Merkx and Frank (2020) and Hao et al. 
(2020), we find that the model architecture, along with the composition and size of the training corpus, has a significant impact on the psycholinguistic modeling performance.", "Eye Tracking Features: The accuracy results are averaged over all eye tracking features.", "For a better understanding of the prediction output, we plot the true and the predicted values of two selected features (FPROP and NFIX) for two example sentences in Figure", "2. In both examples, the model predictions strongly correlate with the true values.", "The difference from the mean baseline is more pronounced for the FPROP feature.", "Figure 4 presents the quantitative differences across models in predicting the individual eye tracking features.", "Across all datasets, first pass duration (FPD) and number of re-fixations (NREFIX) are the most accurately predicted features.", "Proportions (FPROP and REPROP) are harder to predict because these features are even more dependent on subject-specific characteristics.", "Nevertheless, when comparing the prediction accuracy of each eye tracking feature to a baseline which always predicts the mean values, the predicted features FPROP and REPROP achieve the largest improvements relative to the mean baseline.", "See Figure 5 for a comparison between all features for the best performing model XLM-100 on all six datasets.", "To evaluate the language models' abilities to predict human reading behavior only from pretraining on textual input, we take the provided model checkpoints and use them to predict the eye tracking features without any fine-tuning.", "The detailed results are presented in Appendix D.1.", "The achieved accuracy aggregated over all eye tracking features lies between 75% and 78% for English.", "[Footnote: Plots for the remaining datasets are in Appendix D.2. Figure 3: Data ablation study for Dutch and English.] For Dutch, the models achieve", "84% accuracy but for Russian merely 65%.", "Across the same languages, the differences between the language models are only minimal.", "However, on the individual eye tracking features, the pretrained models do not achieve any improvements over the mean baseline (see Appendix D.1).", "For the main experiment, we always tested the models on held-out data from the same dataset.", "In this section, we examine the influence of dataset properties (text domain and language) on the prediction accuracy.", "In a second step, we analyze the influence of more universal input characteristics (word length, text readability).", "Figure 6 shows the results when evaluating the eye tracking predictions on out-of-domain text for the English datasets.", "For instance, we fine-tune the model on the newspaper articles of the Dundee corpus and test on the literary novel of the GECO corpus.", "We can see that the overall prediction accuracy across all eye tracking features is constantly above", "90% in all combinations.", "This shows that our eye tracking prediction model is able to generalize across domains.", "We find that the cross-domain capabilities of BERT are slightly better than those of XLM.", "BERT-EN performs best in the cross-domain evaluation, possibly because its training data is more domain-general since it includes text from Wikipedia and books.", "Figure 7 shows the results for cross-language evaluation to probe 
the language transfer capabilities of the multilingual models.", "We test models fine-tuned on language A on the test set of language B. It can be seen that BERT-MULTI generalizes better across languages than the XLM models.", "This might be due to the fact that the multilingual BERT model is trained on one large vocabulary of many languages but the XLM models are trained with a cross-lingual objective and language information.", "Hence, during fine-tuning on eye tracking Figure 6: Cross-domain evaluation on pretrained English models.", "data from one language the XLM models lose some of their cross-lingual abilities.", "Our results are in line with Pires et al. (2019) and Karthikeyan et al. (2020), who showed that BERT learns multilingual representations in more than just a shared vocabulary space but also across scripts.", "When fine-tuning BERT-MULTI on English or Dutch data and testing on Russian, we see surprisingly high accuracy across scripts, even outperforming the in-language results.", "The XLM models, however, show the expected behavior where transferring within the same script (Dutch, English, German) works much better than transferring between the Latin and Cyrillic script (Russian).", "Gaze patterns are strongly correlated with word length.", "Figure 8 shows that the models accurately learn to predict higher fixation proportions for longer words.", "We observe that the predictions of the XLM -100 model follow the trend in the original data most accurately.", "Similar patterns emerge for the other languages (see Appendix D.3).", "Notably, the pretrained models before fine-tuning do not reflect the word length effect.", "On the sentence level, we hypothesize that eye tracking features are easier to predict for sentences with a higher readability.", "Figure 9 shows the accuracy for predicting the number of fixations ( NFIX ) in a sentence relative to the Flesch reading ease score.", "Interestingly, the pretrained models without fine-tuning conform to the expected behavior and show a consistent increase in accuracy for sentences with a higher reading ease score.", "After fine-tuning on eye tracking data, this behavior is not as visible anymore since the language models achieve constantly high accuracy independent of the readability of the sentences.", "on the structural complexity of the text (see Appendix B for a description of the Flesch Reading Ease score).", "Our results indicate that language models trained purely on textual input are more calibrated towards such structural characteristics, i.e., the number of syllables in a word and the number of words in a sentences.", "Hence, the Flesch reading ease score might not be a good approximation for text readability.", "In future work, comparing eye movement patterns and text difficulty should rely on readability measures that take into account lexical, semantic, syntactic, and discourse features.", "This might reveal deviating patterns between pretrained and fine-tuned models.", "Our analyses indicate that the models learn to take properties of the input into account when predicting eye tracking patterns.", "These processing strategies are similar to those observed in humans.", "Nevertheless, the connection between readability and relative importance in text needs to be analysed in more detail to establish how well these properties are learned by the language models.", "While the superior performance of pretrained transformer language models has been established, we have yet to understand to which extent these models are 
comparable to human language processing behavior.", "We take a step in this direction by fine-tuning language models on eye tracking data to predict human reading behavior.", "We find that both monolingual and multilingual models achieve surprisingly high accuracy in predicting a range of eye tracking features across four languages.", "[Figure 9: Prediction accuracy for NFIX relative to the Flesch reading ease score of the sentence.] Compared to the XLM models, BERT-", "MULTI is more robust in its ability to generalize across languages, without being explicitly trained for it.", "In contrast, the XLM models perform better when fine-tuned on less eye tracking data.", "Generally, fixation duration features are predicted more accurately than fixation proportions, possibly because the latter show higher variance across subjects.", "We observe that the models learn to reflect characteristics of human reading such as the word length effect and higher accuracy in more easily readable sentences.", "The ability of transformer models to achieve such high results in modelling reading behavior indicates that we can learn more about the commonalities between language models and human sentence processing.", "By predicting behavioral metrics such as eye tracking features, we can investigate the cognitive plausibility within these models to adjust or intensify the human inductive biases.", "Lena Jäger was partially funded by the German Federal Ministry of Education and Research under grant 01|S20043." ]
[ "method", "method", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "objective", "abstain", "result", "abstain", "abstain", "result", "abstain", "other", "other", "method", "abstain", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "result", "other", "abstain", "abstain", "abstain", "result", "result", "objective", "other" ]
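The readability analysis above relates prediction accuracy to the Flesch reading ease score (described in the paper's Appendix B). A minimal sketch of the standard formula follows; the vowel-group syllable counter is a crude heuristic and not necessarily the implementation used by the authors.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels (English only).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(sentences) -> float:
    """Standard Flesch Reading Ease for tokenized English sentences.

    sentences: list of sentences, each a list of word tokens.
    Higher scores indicate text that is easier to read.
    """
    n_sents = len(sentences)
    words = [w for sent in sentences for w in sent if w.isalpha()]
    n_words = len(words)
    n_syll = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / n_sents) - 84.6 * (n_syll / n_words)

# Example: bin sentences by their reading-ease score before comparing
# prediction accuracy of pre-trained vs. fine-tuned models (cf. Figure 9).
print(round(flesch_reading_ease([["The", "cat", "sat", "on", "the", "mat"]]), 1))
```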
[ "Aspect category detection (ACD) in sentiment analysis aims to identify the aspect categories mentioned in a sentence.", "In this paper, we formulate ACD in the few-shot learning scenario.", "However, existing few-shot learning approaches mainly focus on single-label predictions.", "These methods can not work well for the ACD task since a sentence may contain multiple aspect categories.", "Therefore, we propose a multi-label few-shot learning method based on the prototypical network.", "To alleviate the noise, we design two effective attention mechanisms.", "The support-set attention aims to extract better prototypes by removing irrelevant aspects.", "The query-set attention computes multiple prototype-specific representations for each query instance, which are then used to compute accurate distances with the corresponding prototypes.", "To achieve multi-label inference, we further learn a dynamic threshold per instance by a policy network.", "Extensive experimental results on three datasets demonstrate that the proposed method significantly outperforms strong baselines.", "Aspect category detection (ACD) (Pontiki et al., 2014, 2015) is an important task in sentiment analysis.", "It aims to identify the aspect categories mentioned in a given sentence from a predefined set of aspect categories.", "For example, in the sentence the cheesecake is tasty and the staffs are friendly , two aspect categories, i.e. food and service , are mentioned.", "The performance of existing approaches for the ACD task (Zhou et al., 2015; Schouten et al., 2018; Hu et al., 2019) relies heavily on the scale of the labeled dataset.", "They usually suffer from limited data and fail to generalize well to novel aspect categories with only a few labeled Shiwan Zhao is the corresponding author.", "instances.", "On the one hand, it is time-consuming and labor-intensive to annotate large-scale datasets.", "On the other hand, given a large dataset, many long-tail aspects still suffer from data sparsity.", "Few-shot learning (FSL) provides a solution to address the above challenges.", "FSL learns like a human, identifying novel classes with limited supervised information by exploiting prior knowledge.", "Many efforts have been devoted to FSL (Ravi and Larochelle, 2017; Finn et al., 2017; Snell et al., 2017; Wang et al., 2018; Gao et al., 2019).", "Among these methods, the prototypical network (Snell et al., 2017) is a promising approach, which is simple but effective.", "It follows the meta-learning paradigm by building a collection of N -way K shot meta-tasks.", "A meta-task aims to infer a query set with the help of a small labeled support set.", "It first learns a prototype for each class in the support set.", "Then the query instance is predicted by measuring the distance with N prototypes in the embedding space.", "scenario, which aims to detect aspect categories accurately with limited training instances.", "However, ACD is a multi-label classification problem since a sentence may contain multiple aspect categories.", "Most FSL works learn a single-label classifier and can not work well to address the ACD task.", "The reasons are two-fold.", "Firstly, the sentences of each class (i.e., aspect category) in the support set are diverse and contain noise from irrelevant aspects.", "As displayed in Figure 1, there are three classes in the support set, and each class has two instances.", "The aspect categories food and salon tend to be noise for this meta-task, making it hard to learn a good prototype for each class in 
the support set.", "Secondly, the query set is also noisy.", "Figure 1 demonstrates three different cases.", "The first sentence mentions two aspects hotel and room cleanliness out of the support set.", "We need to detect both aspects accurately as multi-label classification.", "When detecting each of them, the other aspect acts as noise and makes the task hard.", "The second sentence is an easy case with a single aspect staff owner .", "The third sentence mentions the aspect staff owner out of the support set, while the aspect service is noise for this meta-task.", "In summary, the noise from both the support set and query set makes the few-shot ACD a challenging task.", "To this end, we propose a multi-label FSL method based on the prototypical network (Snell et al., 2017).", "We alleviate the noise in the support set and query set by two effective attention mechanisms.", "Concretely, the support-set attention tries to extract the common aspect of each class.", "By removing the noise (i.e., irrelevant aspects), the support-set attention can yield better prototypes.", "Then for a query instance, the query-set attention utilizes the prototypes to compute multiple prototype-specific query representations, in which the irrelevant aspects are removed.", "Given the better prototypes and the corresponding prototype-specific query representations, we can compute accurate distances between the query instance and the prototypes in the embedding space.", "We detect the aspect categories in the query instance by ranking the distances.", "To select the positive aspects from the ranking, we design a policy network (Williams, 1992) to learn a dynamic threshold for each instance.", "The threshold is modeled as the action of the policy network with continuous action space.", "We formulate ACD as a multi-label FSL problem and design a multi-label FSL method based on the prototypical network to solve the problem.", "To the best of our knowledge, we are the first to address ACD in the few-shot scenario.", "To alleviate the noise from the support set and query set, we design two effective attention mechanisms, i.e., support-set attention and query-set attention.", "Experimental results on the three datasets demonstrate that our method outperforms strong baselines significantly.", "Aspect Category Detection Previous works for ACD can mainly be divided into two types: unsupervised and supervised methods.", "Unsupervised approaches extract aspects by mining semantic association (Su et al., 2006) or co-occurrence frequency (Hai et al., 2011; Schouten et al., 2018).", "These methods require a large corpus to mine aspect knowledge and have limited performance.", "Supervised methods address this task via hand-crafted features (Kiritchenko et al., 2014), automatically learning useful representations (Zhou et al., 2015), multi-task learning (Xue et al., 2017; Hu et al., 2019), or topic-attention model (Movahedi et al., 2019).", "The above methods detect aspect categories out of a pre-defined set, which cannot handle the unseen classes.", "These challenges motivate us to investigate this task in the few-shot scenario.", "Few-Shot Learning Few-shot learning (FSL) (Fe-Fei et al., 2003; Fei-Fei et al., 2006) is close to real artificial intelligence, which borrows the learning process from the human.", "By incorporating the prior knowledge, it obtains new knowledge fast with limited supervised information.", "Many works have been proposed for FSL, which can be mainly divided into four research directions.", "One promising 
direction is distance-based methods.", "These methods measure the distance between instances in the feature embedding space.", "The siamese network (Koch et al., 2015) infers the similarity score between an instance pair.", "Others compare the cosine similarity (Vinyals et al., 2016) or Euclidean distance (Snell et al., 2017).", "The relation network (Sung et al., 2018) exploits a neural network to learn the distance metric.", "Afterward, SA E query Softmax Label of query MSE Loss 1 1 0 0.5 0.5 0 Normalize Support-set Attention SA E SA E E QA QA QAH i 1 Support set Aspect category 1 Aspect category 2 Aspect category 3 H i 2 H i 1 H i 2 H i 1 H i 2 i = 1 i = 2 i = 3 H q H i 1 H i 2 v i attention matrix W i aspect-wise attention r i 1 r i 2 E SA QA Encoder Support-set Attention Query-set Attention Euclidean distance prototype r i K-shot instances of aspect i common aspect vector Figure 2: The left part depicts the main network for an example N -way K -shot meta-task with a query instance ( N = 3 , K = 2 ).", "Garcia and Bruna (2018) utilize graph convolution network to extract the structural information of classes.", "The second direction focuses on the optimization of networks.", "Model-agnostic meta-learning (MAML) algorithm (Finn et al., 2017) learns a good initialization of the model and updates the model by a few labeled examples.", "Meta networks (Munkhdalai and Yu, 2017) achieve rapid generalization via fast parameterization.", "The third type is based on hallucination (Wang et al., 2018; Li et al., 2020).", "This research line directly deals with data deficiency by learning to augment, which designs a generator on the base classes and then hallucinates novel class data to augment few-shot samples.", "The last direction introduces a weight generator to predict classification weight given a few novel class samples, either based on attention mechanism (Gidaris and Komodakis, 2018) or Gaussian distribution (Guo and Cheung, 2020).", "A recent work Proto-HATT (Gao et al., 2019) is similar to ours.", "Proto-HATT is based on the prototypical network (Snell et al., 2017), which deals with the text noise in the relation classification task by employing hybrid attention at both the instance-level and the feature-level.", "This method is designed for single-label FSL.", "Compared with it, our method designs two attention mechanisms to alleviate the noise on the support set and query set, respectively.", "The collaboration of two attentions helps compute accurate distances between the query instance and prototypes, and then improves multi-label FSL.", "with single-label FSL, the multi-label FSL has been underexplored.", "Previous works focus on image synthesis (Alfassy et al., 2019) and signal processing (Cheng et al., 2019).", "Rios and Kavuluru (2018) develop few-shot and zero-shot methods for multi-label text classification when there is a known structure over the label space.", "Their approach relies on label descriptors and the hierarchical structure of the label spaces, which limits its application in practice.", "Hou et al. 
(2020) propose to address the multi-label intent detection task in the FSL scenario.", "It calibrates the threshold by kernel regression.", "Different from this work, we learn a dynamic threshold per instance in a reinforced manner.", "In the few-shot ACD scenario, each meta-task contains a support set S and a query set Q .", "The meta-task is to assign the query instance to the class(es) of the support set.", "An instance may be a multi-aspect sentence.", "Thus a query sentence may describe more than one class out of the support set 1 .", "Therefore, we define the few-shot ACD as a multi-label few-shot classification problem.", "Suppose in an N -way K -shot meta-task, the support set is S = { ( x i 1 , ...x iK ) , y i } N i =1 , where each x i", "1 We found that the probability of a query instance belonging to more than one class is around 4.5% in the ACD dataset, i.e. FewAsp, by randomly sampling 10,000 5-way 5-shot meta-tasks with 5 query sentences for each class.", "is a sentence and ( x i 1 , ..., x iK ) all contain the aspect category y i .", "A query instance is ( x q , y q ) , where y q is a binary label vector indicating the aspects in x q out of N classes.", "Figure 2 presents the main network by an example 3-way 2-shot meta-task.", "It is composed of three modules, i.e., encoder, support-set attention (SA) and query-set attention (QA).", "Each class in the support set contains K instances, which are fed into the encoder to obtain K encoded sequences.", "Next, SA module extracts a prototype for this class from the encoded sequences.", "After obtaining N prototypes, we feed a query instance into the QA module to compute multiple prototype-specific query representations, which are then used to compute the Euclidean distances with the corresponding prototypes.", "Finally, we normalize the negative distances to obtain the ranking of prototypes and then select the positive predictions (i.e., aspect categories) by a dynamic threshold.", "Next, we will introduce the modules of our method in detail.", "Given an input sentence x = { w 1 , w 2 , ..., w n } , we first map it into an embedding sequence { e 1 , e 2 , ..., e n } by looking up the pre-trained GloVe embeddings (Pennington et al., 2014).", "Then we encode the embedding sequence by a convolutional neural network (CNN) (Zeng et al., 2014; Gao et al., 2019).", "The convolution kernel slides with the window size m over the embedding sequence.", "We gain the contextual sequence H = { h 1 , h 2 , ..., h n } , H R n d : h i = CNN( e i m 1 2 , ..., e i + m 1 2 ) (1) where CNN( ) is a convolution operation.", "The advantages of CNN are two-fold: first, the convolution kernel can extract n-gram features on the receptive field.", "For example, the bi-gram feature of hot dog could help detect the aspect category food ; second, CNN enables parallel computing over inputs, which is more efficient (Xue and Li, 2018).", "In each class of the support set, the K -shot instances describe a common aspect, i.e., the target aspect of interest 2 .", "As shown in Figure 1, two 2 In almost all cases, there is only one common aspect in the K instances.", "We randomly sample 10,000 5-way 5-shot meta-tasks, and found that the probability of containing more than one common aspect in each class is less than 0.086%.", "The probability will be much lower in the 10-way scenario.", "sentences, Cleanliness was great, and the food was really good and People have mentioned, bed bugs on yelp!! 
, share the common aspect room cleanliness .", "The former contains two aspect categories room cleanliness and food .", "In this example meta-task, it is an instance of the class room cleanliness .", "However, when sampling other meta-tasks, the instance may be used to represent the class food .", "This leads to confusion and makes learning a good prototype difficult.", "To deal with the issue brought by multi-aspect sentences, we first need to identify the common aspect.", "As depicted in the right part of Figure 2, we compute the common aspect vector by the combination of the K -shot instances.", "We then regard the vector as a condition and inject it into the attention mechanism to make our attention mechanism aspect-wise.", "Common Aspect Vector The encoded K -shot instances of a class contain one common aspect and some irrelevant aspects.", "Among these aspects, the common aspect is the majority.", "Thus, we simply conduct a word-level average to extract the common aspect vector v i R d .", "The average operation highlights the common aspect, but cannot completely eliminate noisy aspects.", "To further reduce the noise of irrelevant aspects in each instance, we use the common aspect as the condition in the attention mechanism.", "Aspect-Wise Attention To make the attention mechanism adapt to the condition, we have two designs.", "First, we directly use the common aspect vector to compute the attention with each instance (see Eq. 4), which filters out the irrelevant aspects of each instance to some extent.", "Second, we exploit the idea of dynamic conditional network, which has been demonstrated effective in FSL (Zhao et al., 2018).", "By predicting a dynamic attention matrix with the common aspect vector, our attention mechanism can further adapt to the condition, i.e., the common aspect vector of the class.", "Specifically, we learn different perspectives of the condition by simply repeating the common aspect vector (Vaswani et al., 2017).", "Then it is fed into a linear layer to obtain the attention matrix W i for class i .", "where ( v i e M ) R e M d is the operation repeatedly concatenating v i for e M times.", "The linear layer has parameter matrix W R d e M and bias b R d .", "This layer is shared in the classes of all meta-tasks, which is learned to be class-agnostic.", "Thus in the testing phase, it can generate aspect-wise attention for a novel class.", "Then in class i of the support set, we exploit the common aspect vector and attention matrix to calculate a denoised representation for every instance.", "The denoised representation r ij for the j -th instance is computed as below.", "In this way, the support-set attention is adapted to the condition and is also class-specific.", "Thus it tends to focus on the correct aspect even for a multi-aspect sentence representing different classes.", "Finally, the average of denoised representations for K -shot instances is the prototype of this class.", "r i = avg( r i 1 , r i 2 , ..., r iK ) (5) After processing all classes in the support set, we obtain N prototypes { r 1 , r 2 , ..., r N } .", "A query instance may also contain multiple aspects, making the sentence noisy.", "To deal with the noise in a query instance, we select the relevant aspects from the query instance by the QA module.", "Specifically, we first process the query instance by the encoder and obtain the encoded instance H q .", "Then we feed H q into the QA module to obtain multiple prototype-specific query representations r iq by the N prototypes.", "The QA 
module tries to focus on the aspect category which is similar to the prototype.", "In Eq.", "6, the attention is non-parametric.", "It can reduce the dependence on parameters and can accelerate the adaptation to unseen classes.", "For a query instance, we compute the Euclidean distance (ED) between each prototype and its prototype-specific query representation, and we obtain N distances.", "Next, we normalize the negative distances as the final prediction, which is a ranking of the prototypes.", "where y q is the ground-truth.", "We also normalize y q to ensure the consistency between the prediction and the ground-truth.", "Learning Dynamic Threshold (DT) To select the positive aspects from the ranking (see Eq. 7) for a query instance, we further learn a dynamic threshold.", "The threshold is modeled by a policy network (Williams, 1992), which has a continuous action space following Beta distribution (Chou et al., 2017).", "Given a query instance, we define the state as [( r 1 r 1 q ) 2 ; ... ; ( r N r Nq ) 2 ; y ] .", "We feed the state into the policy network and obtain the parameters a and b of a Beta distribution.", "Then we sample a threshold from Beta ( | a, b ) .", "The reward score is the F1 score for this instance based on .", "We also introduce a reference score , which is the F1 score based on a baseline action, i.e., the mode of Beta ( | a, b ) : a 1 a + b 2 .", "The training objective is defined as below to minimize the negative expected reward.", "We construct three few-shot ACD datasets from Yelp aspect (Bauman et al., 2017), which is a large-scale multi-domain dataset for aspect recommendation.", "We group all instances by aspects and choose 100 aspect categories.", "Following Han et al. (2018), we split the 100 aspects without intersection into 64 aspects for training, 16 aspects for validation, and 20 aspects for testing.", "According to the sentence type, i.e., single-aspect or multi-aspect 3 , we sample different types of sentences from each group and construct three datasets: FewAsp(single), FewAsp(multi), and FewAsp, which are composed of single-aspect, multi-aspect, and both types of sentences, respectively.", "Note that FewAsp is randomly sampled from the original set of each class, which can better reflect the data distribution in real applications.", "The statistics of the three datasets are shown in Table 1.", "Evaluation Metrics Previous single-label FSL (Snell et al., 2017) usually evaluates performance by accuracy.", "In the multi-label setting, we choose AUC (Area Under Curve) and macro-f1 as the evaluation metrics.", "AUC is utilized for model selection and macro-f1 is computed with a threshold.", "In our experiments, we found that for all methods in three datasets, the overall best thresholds are 0.3 in the 5-way setting and 0.2 in the 10-way setting.", "Thus we choose them for evaluating the baselines.", "Training Details We first train the main network with MSE loss L (Eq. 8).", "Then we initialize the main network with the learned parameters and jointly train the policy network with L t (Eq. 
9).", "The implementation details are described in the appendix.", "3 A sentence contains a single aspect or multiple aspects.", "Our approach is named as Proto-AWATT (aspect-wise attention).", "We validate the effectiveness of the proposed method by comparing with the following popular approaches.", "Matching Network (Vinyals et al., 2016): It is a metric-based attention method, where distance is measured by cosine similarity.", "Prototypical Network (Snell et al., 2017): It computes the average of embedded support examples for each class as the prototype, and then measures the distance between the embedded query instance and each prototype.", "Relation Network (Sung et al., 2018): It utilizes a neural network to learn the relation metric.", "Graph Network (Garcia and Bruna, 2018): It casts FSL as a supervised message passing task by graph neural network.", "IMP (Allen et al., 2019): It proposes infinite mixture prototypes to represent each class by a set of clusters, with the number of clusters determined directly from the data.", "noise with hybrid instance-level and feature-level attention mechanisms.", "We report the experimental results of various methods in Table 2, Table 3, Table 4 and Table 5.", "The best scores on each metric are marked in bold.", "The experimental results demonstrate the effectiveness of our method.", "Overall Performance AUC and macro-f1 scores of all the methods are shown in Table 2, Table 3 and Table 4.", "Firstly, we observe that our method Proto-AWATT achieves the best results on almost all evaluation metrics of the three datasets.", "This reveals the effectiveness of the proposed method.", "Secondly, compared to Proto-HATT, Proto-AWATT achieves significant improvement.", "It is worth noting that the average improvement of macro-f1 on three datasets is 4.99%.", "This exhibits that the SA and QA modules successfully reduce noise for few-shot ACD.", "Meanwhile, accurate distance measurement between prototypes and the prototype-specific query representations can facilitate the detection of multiple aspects in the query instance.", "Then we found that all methods on Fe-wAsp(multi) perform consistently worse than the counterparts on FewAsp(single) and FewAsp.", "This is because more aspects increase the complexity of the dataset.", "On FewAsp(multi), Proto-AWATT still outperforms other methods in most settings, Models Proto-HATT Proto-AWATT AUC F1 AUC F1 GloVe + CNN 0.9063 57.26 0.9206 65.65 GloVe + LSTM 0.9137 59.46 0.9357 66.86 BERT 0.8971 57.33 0.9459 70.09 DistilBERT 0.9067 59.57 0.9451 70.23 Table 6: Ablation study of using different encoders in the 10-way 5-shot scenario on FewAsp.", "In general, the 10-way scenario contains much more noise than the 5-way.", "We observe that compared to Proto-HATT, Proto-AWATT achieves more significant improvements in the 10-way scenario than the 5-way.", "The results further indicate that Proto-AWATT can really alleviate the noise.", "Ablation Study Table 5 depicts the results of ablation study.", "Firstly, without the SA module, the performances of Proto-AWATT drop a lot.", "In particular, AUC drops by 3.43%, and macro-f1 drops by 17.23% relatively.", "This verifies that the SA module helps reduce noise and extract better prototypes.", "We can also see that without attention matrix W i in SA causes consistent decreases on all metrics.", "This suggests that predicting dynamic attention matrix for each class is effective, which makes the SA module extract better prototypes.", "Then we found that without the QA module, 
Proto-AWATT significantly performs worse.", "This validates that for a query instance, computing multiple prototype-specific query representations helps obtain accurate distances for ranking, which facilitates the multi-label predictions.", "Finally, when removing DT and using a static threshold ( = 0 . 2 in the 10-way setting), it causes a slight decrease.", "This shows that learning dynamic threshold is effective.", "We further compare DT with two alternative dynamic threshold methods: (1) MS (mean standard deviation of the threshold by cross-validation); (2) a kernel regression (KR) 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Threshold 0 10 20 30 40 50 60 M a c r o F 1 Proto-HATT Proto-AWATT w/o DT Proto-AWATT Proto-AWATT w/o DT w/ KR Figure 3: Macro-f1 scores for different thresholds on 10-way 5-shot setting of FewAsp.", "approach which is proposed by Hou et al. (2020) to calibrate the threshold.", "Comparing with MS and KR, our method slightly outperforms them.", "This is because DT benefits from reinforcement learning and directly optimizes the evaluation metrics.", "Different Encoders We also compare the performances of our method with a strong baseline Proto-HATT when using different encoders to obtain the contextual sequence H .", "The results are reported in Table 6.", "The output of pre-trained encoders, i.e., BERT (Devlin et al., 2019) or DistilBERT (Sanh et al., 2019), are directly used as the contextual sequence.", "We observe that Proto-AWATT significantly outperforms the strong baseline Proto-HATT on all encoders.", "Effects of Thresholds As depicted in Figure 3, we analyze the impact of different thresholds on the macro-f1 score during inference.", "We can see that Proto-AWATT without DT consistently outperforms Proto-HATT in various thresholds.", "Macro-f1 scores of the two methods are getting worse as grows.", "However, the declines in Proto-HATT are more significant.", "At = 0 .", "9 , the macro-f1 of Proto-HATT drops nearly to 0.", "Proto-AWATT without DT still achieves much higher macro-f1.", "This indicates that the proposed two attention mechanisms help extract an accurate ranking of prototypes.", "The ranking is less sensitive to the threshold, which makes our method robust and stable.", "We also found that learning threshold by DT benefits from a reinforced way, which slightly outperforms KR and the best static threshold.", "We further analyze Proto-AWATT by visualizing the extracted representations from the support set", "and query set, respectively.", "The representations are visualized by t-SNE (Maaten and Hinton, 2008).", "To observe the performance in a challenging situation, we choose the testing set from FewAsp(multi) as an example.", "Support Set Figure 4 presents the visualization of extracted prototypes from two methods.", "We randomly sample 5 classes and then sample 50 times of 5-way 5-shot meta-tasks for the five classes.", "Then for each class, we have 50 prototype vectors.", "We observe that prototype vectors from our approach are more separable than those from Proto-HATT.", "This further indicates that the SA module can alleviate noise and thus yield better prototypes.", "Query Set We randomly sample 5 classes and then sample 20 times of 5-way 5-shot meta-tasks for these classes.", "Each meta-task has 5 query instances per class.", "Thus we have 25 20 = 500 query instances.", "It is worth noting that our model learns N prototype-specific query representations for each query instance.", "We choose the representations according to the ground-truth label.", 
"However, Proto-HATT only outputs a single representation for a query instance.", "As depicted in Figure 5, we can see that the representations learned by our method are obviously more separable than those by Proto-HATT.", "This further reveals that Proto-AWATT can obtain accurate prototype-specific query representations, which contributes to computing accurate distances.", "In this paper, we formulate the aspect category detection (ACD) task in the few-shot learning (FSL) scenario.", "Existing FSL methods mainly focus on single-label predictions.", "They can not work well for the ACD task since a sentence may contain multiple aspect categories.", "Therefore, we propose a multi-label FSL method based on the prototypical network.", "Specifically, we design two effective attention mechanisms for the support set and query set to alleviate the noise from both sets.", "To achieve multi-label inference, we further learn a dynamic threshold per instance by a policy network with continuous action space.", "Extensive experimental results in three datasets demonstrate that our method outperforms strong baselines significantly.", "We sincerely thank all the anonymous reviewers for providing valuable feedback.", "This work is supported by the National Science and Technology Major Project, China (Grant No. 2018YFB0204304)." ]
[ "abstain", "method", "abstain", "abstain", "objective", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "objective", "objective", "method", "objective", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "objective", "method", "result", "objective", "other", "other" ]
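The few-shot ACD method above builds class prototypes with a support-set attention conditioned on a common aspect vector, ranks prototypes by Euclidean distance to prototype-specific query representations, and cuts the ranking with a threshold. The sketch below is a heavily simplified reconstruction under stated assumptions: the dynamic attention matrix W_i, the query-set attention, and the Beta-distribution policy for the threshold are either simplified or replaced by a static cutoff, and all shapes are illustrative.

```python
import torch
from torch import nn
import torch.nn.functional as F

class SupportSetAttention(nn.Module):
    """Simplified aspect-wise support-set attention: the common aspect vector
    of a class conditions an attention over the tokens of each support
    instance; the denoised instances are averaged into the class prototype."""

    def __init__(self, d_model=256):
        super().__init__()
        self.cond = nn.Linear(d_model, d_model)  # stands in for the dynamic W_i

    def forward(self, H):
        # H: (K, T, d) encoded K-shot support instances of one class.
        v = H.mean(dim=(0, 1))                      # common aspect vector, (d,)
        q = self.cond(v)                            # condition-adapted query, (d,)
        scores = torch.einsum("ktd,d->kt", H, q) / H.size(-1) ** 0.5
        alpha = F.softmax(scores, dim=-1)           # token weights per instance
        r = torch.einsum("kt,ktd->kd", alpha, H)    # denoised instance vectors
        return r.mean(dim=0)                        # class prototype, (d,)

def multilabel_predict(query_reps, prototypes, threshold=0.3):
    """Rank classes by negative Euclidean distance and keep those whose
    normalized score exceeds a threshold (the paper learns the threshold per
    instance with a Beta-distribution policy; a static value is used here)."""
    dists = ((query_reps - prototypes) ** 2).sum(dim=-1)   # (N,)
    scores = F.softmax(-dists, dim=-1)
    return (scores > threshold).long(), scores

# Toy 3-way 2-shot meta-task with random features.
sa = SupportSetAttention(d_model=256)
prototypes = torch.stack([sa(torch.randn(2, 20, 256)) for _ in range(3)])
query_reps = torch.randn(3, 256)   # one prototype-specific representation per class
labels, scores = multilabel_predict(query_reps, prototypes)
```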
[ "Chen Xu 1, Bojie Hu 2, Yanyang Li 3, Yuhao Zhang 1, Shen Huang 2, Qi Ju 2, Tong Xiao 1,4, Jingbo Zhu 1,4; 1 NLP Lab, School of Computer Science and Engineering,", "{xuchenneu,blamedrlee,yoohaozhang}@outlook.com, {bojiehu,springhuang,damonju}@tencent.com, {xiaotong,zhujingbo}@mail.neu.edu.cn", "Abstract: Encoder pre-training is promising in end-to-end Speech Translation (ST), given the fact that speech-to-translation data is scarce.", "But ST encoders are not simple instances of Automatic Speech Recognition (ASR) or Machine Translation (MT) encoders.", "For example, we find that ASR encoders lack the global context representation, which is necessary for translation, whereas MT encoders are not designed to deal with long but locally attentive acoustic sequences.", "In this work, we propose a Stacked Acoustic-and-Textual Encoding (SATE) method for speech translation.", "Our encoder begins by processing the acoustic sequence as usual, but later behaves more like an MT encoder for a global representation of the input sequence.", "In this way, it is straightforward to incorporate the pre-trained models into the system.", "Also, we develop an adaptor module to alleviate the representation inconsistency between the pre-trained ASR encoder and MT encoder, and develop a multi-teacher knowledge distillation method to preserve the pre-training knowledge.", "Experimental results on the LibriSpeech En-Fr and MuST-C En-De ST tasks show that our method achieves state-of-the-art BLEU scores of 18.3 and 25.2.", "To our knowledge, we are the first to develop an end-to-end ST system that achieves comparable or even better BLEU performance than the cascaded ST counterpart when large-scale ASR and MT data is available.", "End-to-end Speech Translation (E2E ST) has become popular recently for its ability to free designers from cascading different systems and shorten the pipeline.", "Promising results on small-scale tasks are generally favorable.", "However, speech-to-translation paired data is scarce.", "Researchers typically use pre-trained Automatic Speech Recognition (ASR) and Machine Translation (MT) models to boost ST systems (Berard et al., 2018).", "For example, one can initialize the ST encoder using a large-scale ASR model (Bansal et al., 2019).", "But we note that, despite significant development effort, our end-to-end ST system with pre-trained models was not able to outperform the cascaded ST counterpart when the ASR and MT data size was orders of magnitude larger than that of ST (see Table 1).", "In this paper, we explore the reasons why pre-training has been challenging in ST, and how pre-trained ASR and MT models might be used together to improve ST. 
We find that the ST encoder plays both roles of acoustic encoding and textual encoding.", "This makes it problematic to view an ST encoder as either an individual ASR encoder or an individual MT encoder.", "More specifically, there are two problems.", "Modeling deficiency: the MT encoder tries to capture long-distance dependency structures of language, but the ASR encoder focuses more on local dependencies in the input sequence.", "Since the ST encoder is initialized by the pre-trained ASR encoder (Berard et al., 2018), it fails to model large contexts in the utterance.", "But a large scope of representation learning is necessary for translation (Yang et al., 2018).", "Representation inconsistency: on the decoder side of ST, the MT decoder is in general used to initialize the model.", "The assumption here is that the upstream component is an MT-like encoder, whereas the ST encoder actually behaves more like an ASR encoder.", "We address these problems by marrying the world of ASR encoding with the world of MT encoding.", "We propose a Stacked Acoustic-and-Textual Encoding (SATE) method to cascade the ASR encoder and the MT encoder.", "It first reads and processes the sequence of acoustic features as a usual ASR encoder.", "Then an adaptor module passes the acoustic encoding output to an MT encoder with two principles: informative and adaptive.", "In this way, pre-trained ASR and MT encoders can work for what we would originally design them, and the incorporation of pre-trained models into ST is more straightforward.", "In addition, we develop a multi-teacher knowledge distillation method to robustly train the ST encoder and preserve the pre-trained knowledge during fine-tuning (Yang et al., 2020).", "We test our method in a Transformer-based end-to-end ST system.", "Experimental results on the LibriSpeech En-Fr and MuST-C En-De speech translation benchmarks show that it achieves the state-of-the-art performance of 18.3 and 25.2 BLEU points.", "Under a more challenging setup, where the large-scale ASR and MT data is available, SATE achieves comparable or even better performance than the cascaded ST counterpart.", "We believe that we are the first to present an end-to-end system that can beat the strong cascaded system in unrestricted speech translation tasks.", "Speech translation aims at learning models that can predict, given some speech in the source language, the translation into the target language.", "The earliest of these models were cascaded: they treated ST as a pipeline of running an ASR system and an MT system sequentially (Ney, 1999; Mathias and Byrne, 2006; Schultz et al., 2004).", "This allows the use of off-the-shelf models, and was (and is) popular in practical ST systems.", "However, these systems were sensitive to the errors introduced by different component systems and the high latency of the long pipeline.", "As another stream in the ST area, end-to-end methods have been promising recently (Berard et al., 2016; Weiss et al., 2017; Berard et al., 2018).", "The rise of end-to-end ST can be traced back to the success of deep neural models (Duong et al., 2016).", "But, unlike other well-defined tasks in deep learning, annotated speech-to-translation data is scarce, which prevents well-trained ST models.", "A simple solution to this issue is data augmentation (Pino et al., 2019, 2020).", "This method is model-free but generating large-scale synthetic data is time consuming.", "As an alternative, researchers used multi-task learning (MTL) to robustly train the ST model so that 
it could benefit from additional guide signals (Weiss et al., 2017; Anastasopoulos and Chiang, 2018; Berard et al., 2018; Sperber et al., 2019; Dong et al., 2021).", "Generally, MTL requires a careful design of the loss functions and more complicated architectures.", "In a similar way, more recent work pre-trains different components of the ST system, and consolidates them into one.", "For example, one can initialize the encoder with an ASR model, and initialize the decoder with the target-language side of an MT model (Berard et al., 2018; Bansal et al., 2019; Stoian et al., 2020).", "More sophisticated methods include better training and fine-tuning (Wang et al., 2020a,b), the shrink mechanism (Liu et al., 2020), the adversarial regularizer (Alinejad and Sarkar, 2020), and etc.", "Although pre-trained models have quickly become dominant in many NLP tasks, they are still found to underperform the cascaded model in ST. This motivates us to explore the reasons why this happens and methods to solve the problems accordingly.", "Following previous work in end-to-end models (Be-rard et al., 2016; Weiss et al., 2017), we envision an encoding-decoding process in which an input sequence is encoded into a representation vector, and the vector is then decoded into an output sequence.", "In such a scenario, all end-to-end ST, ASR and MT systems can be viewed as instances of the same architecture.", "Then, components of these systems can be pre-trained and re-used across them.", "An underlying assumption here is that the ST encoder is doing something quite similar to what the MT (or ASR) encoder is doing.", "However, Sperber et al. (2018) find that the ASR model benefits from a small attention window, which is inconsistent with the MT model (Yang et al., 2018).", "To verify this, we compare the behavior of ST, ASR and MT encoders.", "We choose Transformer as the base architecture (Vaswani et al., 2017) and run experiments on the MuST-C En-De corpus.", "We report the results on the MuST-C En-De tst-COMMON test data.", "For stronger systems, we use Connectionist Temporal Classification (CTC) (Graves et al., 2006) as the auxiliary loss on the encoders when we train the ASR and ST systems (Watanabe et al., 2017; Karita et al., 2019; Bahar et al., 2019).", "The CTC loss forces the encoders to learn alignments between speech and transcription.", "It is necessary for the state-of-the-art performance (Watanabe et al., 2018).", "Here we define the localness of a word as the sum of the attention weights to the surrounding words (or features) within a fixed small window 2 .", "The window size is 10% of the sequence length.", "Figure", "1(a) shows the localness of the attention weights for different layers of the encoders.", "We see that the ST and ASR encoders prefer local attention which indicates a kind of short-distance dependencies in processing acoustics feature sequences.", "Whereas the MT encoder generates a 2 Here we treat the attention weight of Transformer as a distribution over all positions.", "more global distribution of attention weights for word sequences, especially when we stack more layers.", "This result arises a new question: Is local attention sufficient for speech translation?", "Then, we design another experiment to examine if the high localness in attention weights of the ASR and ST encoders is due to the bias imposed by CTC.", "In Figure", "1(b), we use the CTC loss in the intermediate layer and show the average localness of the layers above or below CTC.", "The CTC loss demonstrates 
strong preference for locally attentive models.", "The upper-level layers act more like an MT encoder, that is, the layers with no CTC loss generate more global distributions.", "Taking this further, Figure", "1(c) demonstrates a slightly higher BLEU score when we free more upper-level layers from the guide of CTC.", "Meanwhile, the word error rate (WER) increases because only the lower parts of the model are learned in the standard manner of ASR.", "Now we have some hints: the ST encoder is not a simple substitution of the ASR encoder or the MT encoder.", "Rather, they are complementary to each other, that is, we need the ASR encoder to deal with the acoustic input, and the MT encoder to generate the representation vector that can work better with the decoder.", "In speech translation, we want the encoder to represent the input speech as some sort of decoder-friendly representation.", "We also want the encoder to be natural for pre-training.", "In the following, we describe Stacked Acoustic-and-Textual Encoding (SATE), a new ST encoding method that meets these requirements, and improvements of it.", "Unlike previous work, the SATE method does not rely on a single encoder to receive the signal from both the CTC loss and the feedback of the decoder.", "Instead, it is composed of two encoders: the first does exactly the same thing as the ASR encoder (call it the acoustic encoder), and the other generates a higher-level, globally-attentive representation on top of the acoustic encoder (call it the textual encoder).", "See Figure 2 for the architecture of SATE.", "The acoustic encoder is trained by CTC in addition to the supervision signal from the translation loss.", "Let (x, y^s, y^t) be an ST training sample, where x is the input feature sequence of the speech, y^s is the transcription of x, and y^t is the translation in the target language.", "We define the output of the acoustic encoder as: h^s = E_s(x) (1), where E_s(·) is the encoding function.", "Then, we add a Softmax layer on h^s to predict the CTC label path π = (π_1, ..., π_T), where T is the length of the input sequence.", "The probability of a path, P(π | h^s), is the product of the probabilities P(π_t | h^s_t) at every time t under the conditional independence assumption: P(π | h^s) ≈ ∏_{t=1}^{T} P(π_t | h^s_t) (2). CTC works by summing over the probabilities of all possible alignment paths β(y^s) between x and y^s, as follows: P_CTC(y^s | h^s) = Σ_{π ∈ β(y^s)} P(π | h^s) (3). Then, the CTC loss is defined as: L_CTC = -log P_CTC(y^s | h^s; θ_CTC) (4), where θ_CTC denotes the parameters of the acoustic encoder and the CTC output layer.", "Its output is defined as: ĥ^s = A(h^s, P(π | h^s)) (5). We leave the design of the adaptor to Section 4.2.", "Furthermore, we stack the textual encoder on the adaptor.", "The output h^t is defined as: h^t = E_t(ĥ^s) (6), where E_t(·) is the textual encoder.", "The acoustic encoder is followed by an adaptor.", "It receives h^s and P(π | h^s), and produces a new representation required by the textual encoder.", "Let A(·, ·) be the adaptor module.", "h^t is fed into the decoder for computing the translation probability P_Trans(y^t | h^t), as in standard MT systems.", "We define the translation loss as: L_Trans = -log P_Trans(y^t | h^t; θ_ST) (7), where θ_ST denotes all model parameters except for the CTC output layer.", "Finally, we interpolate L_CTC and L_Trans (with coefficient α) for the loss of the entire model: L = α L_CTC + (1 - α) L_Trans (8). Since the textual encoder works for the decoder only, it is trained 
as an MT encoder.", "In this way, the acoustic and textual encoders can do what we would originally expect them to do: the acoustic encoder deals with the acoustic input (i.e., ASR en-coding), and the textual encoder generates a representation for translation (i.e., MT encoding).", "Also, SATE is friendly to pre-training.", "One can simply use an ASR encoder as the acoustic encoder, and use an MT encoder as the textual encoder.", "Note that SATE is in general a cascaded model, in response to the pioneering work in ST (Ney, 1999).", "It can be seen as cascading the ASR and MT systems in an end-to-end fashion.", "Now we turn to the design of the adaptor.", "Note that the pre-trained MT encoder assumes that the input is a word embedding sequence.", "Simply stacking the MT encoder and the ASR encoder obviously does not work well.", "For this reason, the adaptor fits the output of the ASR encoder (i.e., the acoustic encoder) to what an MT encoder would like to see.", "We follow two principles in designing the adaptor: adaptive and informative .", "We need an adaptive representation to make the input of the textual encoder similar to that of the MT encoder.", "To this end, we generate the soft contextual representation that shares the same latent space with the embedding layer of the MT encoder.", "As shown in Eq.", "(2), the CTC output P ( t | h st ) indicates the alignment probability over the vocabulary at time t .", "Instead of replacing the representation by the embedding of the most-likely token (Liu et al., 2020), we employ a soft token which is the expectation of the embedding over the distribution from CTC.", "Let W e be the embedding matrix of the textual encoder, we define the soft representation h s soft as: h s soft = P ( | h s ) W e (9) Also, an informative representation should contain information in the original input (Peters et al., 2018).", "The output acoustic representation of the ASR encoder generally involves paralinguistic information, such as emotion, accent, and emphasis.", "They are not expressed in the form of text explicitly but might be helpful for translation.", "For example, the generation of the declarative or exclamatory sentences depends on the emotions of the speakers.", "We introduce a single-layer neural network to learn to map the acoustic representation to the latent space of the textual encoder, which preserves the acoustic information: h s map = ReLU ( W map h s + b map ) (10) where W map and b map are the trainable parameters.", "The final output of the adaptor is defined to be: A ( h s , P ( | h s )) = h s map + (1 ) h s soft (11) where is the weight of h s map and set to 0.5 by default.", "Figure 3 shows the architecture of the adaptor.", "Note that, in the adaptor, we do not change the sequence length for textual encoding because such a way is simple for implementation and shows satisfactory results in our experiments.", "Although there is a length inconsistency issue, the sequence representation of the speech should be similar with the Mapping Layer w e you I h i t h a t ca n = (cid:76) Output CTC Distribution Acoustic Representation Embedding Soft Embedding Figure 3: The architecture of the adaptor.", "correspond transcription.", "Shrinking the sequence simply results in information incompleteness.", "We will investigate this issue in the future.", "Another improvement here is that we develop a multi-teacher knowledge distillation (MTKD) method to preserve the pre-trained knowledge during fine-tuning (Hinton et al., 2015).", "The ST model mimics the 
teacher distribution by minimizing the cross-entropy loss between the teacher and student (Liu et al., 2019).", "For a training sample ( x, y s , y t ) , we define two loss functions: LKD CTC = T (cid:88) m =1 | V | (cid:88) k =1 Q ( m = v k | x ; ASR ) log P ( m = v k | x ; CTC ) (12) LKD Trans = | y t | (cid:88) n =1 | V | (cid:88) k =1 Q ( y tn = v k | y s ; MT ) log P ( y tn = v k | x ; ST ) (13) where v k is the word indexed by k and V is the vocabulary shared among the ST, ASR, and MT models.", "Q ( | ) is the teacher distribution and P ( | ) is the student distribution.", "ASR , CTC , MT and ST are the model parameters.", "We can rewrite Eq.", "(8) to obtain a new loss: L = (cid:0) LCTC + (1 ) LKD CTC (cid:1) +(1 ) (cid:0) L Trans + (1 ) LKD Trans (cid:1) (14) where both and are the hyper-parameters that balance the preference between the teacher distribution and the ground truth.", "We consider restricted and unrestricted settings on speech translation tasks.", "We run experiments on the LibriSpeech English-French (En-Fr) (Ko-cabiyikoglu et al., 2018) and MuST-C English-German (En-De) (Gangi et al., 2019) corpora, which correspond to the low-resource and high-resource datasets respectively.", "Available ASR and MT data is only from the ST data under the restricted setting.", "For comparison in practical scenarios, the unrestricted setting allows the additional data for ASR and MT models.", "LibriSpeech En-Fr Followed previous work, we use the clean speech translation training set of 100 hours, including 45K utterances and doubled translations of Google Translate .", "We select the model on the dev set (1,071 utterances) and report results on the test set (2,048 utterances).", "MuST-C En-De MuST-C is a multilingual speech translation corpus extracted from the TED talks.", "We run the experiments on the English-German speech translation dataset of 400 hours speech with 230K utterances.", "We select the model on the dev set (1,408 utterances) and report results on the tst-COMMON set (2,641 utterances).", "Unrestricted Setting We use the additional ASR and MT data for pre-training.", "The 960 hours LibriSpeech ASR corpus is used for the English ASR model.", "We extract 10M sentences pairs from the WMT14 English-French and 18M sentence pairs from the Opensubtitle2018 3 English-German translation datasets.", "Preprocessing Followed the preprocessing recipes of ESPnet (Inaguma et al., 2020), we remove the utterances of more than 3,000 frames and augment speech data by speed perturbation with factors of 0.9, 1.0, and 1.1.", "The 80-channel log-mel filterbank coefficients with 3-dimensional pitch features are extracted for speech data.", "We use the lower-cased transcriptions without punctuations.", "The text is tokenized using the scripts of Moses (Koehn et al., 2007).", "We learn Byte-Pair Encoding (Sennrich et al., 2016) subword segmentation with 10,000 merge operations based on a shared source and target vocabulary for all datasets.", "All experiments are implemented based on the ESPnet toolkit 4 .", "We use the Adam optimizer with 1 = 0 .", "9 , 2 = 0 .", "997 and adopt the default learning schedule in ESPnet.", "We apply dropout with a rate of 0.1 and label smoothing (cid:15) ls = 0 .", "1 for regularization.", "For reducing the computational cost, the input speech features are processed by two convolutional layers, which have a stride of 2 2 and down-sample the sequence by a factor of 4 (Weiss et al., 2017).", "The encoder consists of 12 layers for both the ASR and vanilla ST models, 
and 6 layers for the MT model.", "The encoder of SATE includes an acoustic encoder of 12 layers and a textual encoder of 6 layers.", "The decoder consists of 6 layers for all models.", "The weight of CTC objective for multitask learning is set to 0.3 for all ASR and ST models.", "The coefficients and are set to 0.5 in Eq.", "(14) for the MTKD method.", "Under the restricted setting, we employ the Transformer architecture, where each layer comprises 256 hidden units, 4 attention heads, and 2048 feed-forward size.", "For the unrestricted setting, we use the superior architecture Conformer (Gulati et al., 2020) on the ASR and ST tasks and widen the model by increasing the hidden size to 512 and attention heads to 8.", "The ASR 5 and MT models pre-train with the additional data and fine-tune the model parameters with the task-specific data.", "During inference, we average the model parameters on the best 5 checkpoints based on the performance of the development set.", "We use beam search with a beam size of 4 for all models.", "Different from previous work, we report the case-sensitive SacreBLEU 6 (Post, 2018) for future standardization comparison across papers.", "Results on MuST-C En-De Table 2 summaries the experimental results on the MuST-C En-De task.", "Under the restricted setting, the cascaded ST model translates the output of the ASR model, which degrades the performance compared with the MT model that translates from the reference transcription.", "The performance of the E2E ST baseline with pre-training is only slightly lower than the cascaded counterpart.", "SATE outperforms the baseline 4 https://github.com/espnet/espnet 5 We use the pre-trained ASR model offered by ESPnet.", "6 BLEU+case.mixed+numrefs.1+smooth.exp+tok.13a +version.1.4.14 Method Restricted Unrestricted ESPnet MT 27.63 ESPnet Cascaded 23.65 MT 26.9 31.1 Cascaded ST 23.3 28.1 ESPnet E2E ST 22.33 E2E ST 22.1 23.6 +Pre-training 23.1 25.6 SATE 23.3 23.6 +Pre-training 24.1 27.3 +MTKD 24.7 27.9 +SpecAug 25.2 28.1 Table 2: BLEU scores [ % ] on the test set of MuST-C En-De corpus.", "model significantly.", "This demonstrates the superiority of stacked acoustic and textual encoding for the speech translation task.", "Incorporating the pre-trained ASR and MT models into SATE releases the encoding burden of the model and achieves a remarkable improvement.", "The MTKD method provides a strong supervised signal and forces the model to preserve the pre-trained knowledge.", "Furthermore, we utilize the SpecAugment (Park et al., 2019) which is applied in the input speech features for better generalization and robustness 7 .", "It yields a remarkable improvement of 1.9 BLEU points over the cascaded baseline and achieves a new state-of-the-art performance.", "Under the unrestricted setting, the large-scale ASR and MT data is available, whereas the ST data is scarce.", "This leads to the cascaded method outperforms the vanilla E2E method with a huge margin of 4.5 BLEU points.", "The pre-training only slightly closes the gap due to the modeling deficiency and representation inconsistency.", "SATE incorporates the pre-trained models fully, which achieves a significant improvement of 3.7 BLEU points.", "With the MTKD and SpecAugment methods, we achieve a comparable performance of 28.1 BLEU points.", "To our knowledge, we are the first to develop an end-to-end ST system that achieves comparable performance with the cascaded counterpart when large-scale ASR and MT data is available.", "it is of small magnitude with clean speech data.", 
"This results in that the performance of the vanilla E2E baseline is even better than the cascaded counterpart under the restricted setting.", "Furthermore, pre-training helps the model achieve an improvement of 0.8 BLEU points over the cascaded baseline.", "More interestingly, SATE without pre-training outperforms the above methods significantly, even achieves a slight improvement than the MT model.", "A possible reason is that the diverse acoustic representation is fed to the textual encoder, which improves the robustness of the model.", "This demonstrates the superiority of our method.", "Combining our proposed methods yields a substantial improvement of 2.0 BLEU points over the cascaded baseline.", "It is a new state-of-the-art result of 18.3 BLEU points.", "Also, we outperform the cascaded counterpart by 0.2 BLEU points on the unrestricted task.", "In Table 4, we summarize the performance and inference speedup based on the real time factor (RTF).", "The vanilla E2E ST model yields an inference speedup of 1 .", "91 than the cascaded counterpart and demonstrates the low latency of the end-to-end methods.", "We increase the encoder layers for comparison with SATE under the similar model parameters.", "However, there is a remarkable gap of 0.5 or 0.6 BLEU points, with or without pre-training.", "Our method not only improves the performance of 1.9 BLEU points but also reaches up to 1 .", "69 speedup than the cascaded baseline.", "This encourages the application of the end-to-end ST model in practical scenarios.", "The effects of the pre-trained modules are shown in Table", "5. The model performance drops significantly without the pre-trained ASR encoder, especially on the MuST-C corpus that contains noisy speech.", "The model parameters of pre-trained MT model are updated for adapting the output representation of the random initialized acoustic encoder.", "This results in the catastrophic forgetting problem (Goodfellow et al., 2015).", "The effect of the pre-trained MT model is more remarkable on the LibriSpeech corpus due to the modeling burden on the translation.", "The benefit of the pre-trained MT decoder is larger than the MT encoder.", "This is contrary to the previous conclusions that the MT encoder helps the performance significantly (Li et al., 2020).", "A possible reason is that the pre-trained Design MuST-C LibriSpeech None 25.7 21.7 Soft 25.7 21.9 Mapping 26.0 21.8 Fusion 26.4 21.9 Table 6: BLEU scores [ % ] of different adaptor setups on the development set under the unrestricted setting.", "ASR encoder provides a rich representation and acts as part of the MT encoder, this leads to lower performance degradation when the textual encoder trains from scratch.", "Each pre-trained module has a great effect on the final performance.", "With the complete integration of the pre-trained modules, the model parameters are updated slightly, which preserves the pre-trained knowledge.", "We show the effects of the adaptor in Table", "6. 
The straight connection which omits the representation inconsistency issue results in the lower benefit of pre-training.", "Although the soft representation aims at generating the adaptive representation, there is no obvious improvement on the MuST-C corpus.", "A possible reason is that the noisy speech inputs produce the misalignment probabilities, which disturbs the textual encoding.", "The mapping method achieves a slight improvement by transforming the acoustic representation to the textual representation.", "Fusing the soft and mapping representation enriches the information and avoids the representation inconsistency issue, which achieves the best performances.", "Figure 4.", "As mentioned above, the vanilla ST model inherits the preference of ASR, which focuses on short-distance dependencies.", "SATE initializes with the pre-trained ASR and MT encoders, which stacks acoustic and textual encoding.", "The complementary behaviors of the pre-trained models benefit the translation, that is, the lower layers act like an ASR encoder while the upper layers capture global representation like an MT encoder.", "In this paper, we investigate the difficulty of speech translation and shed light on the reasons why pretraining has been challenging in ST. This inspires us to propose a Stacked Acoustic-and-Textual Encoding method, which is straightforward to incorporate the pre-trained models into ST. We also introduce an adaptor module and a multi-teacher knowledge distillation method for bridging the gap between pre-training and fine-tuning.", "Results on the LibriSpeech and MuST-C corpora demonstrate the superiority of our method.", "Furthermore, we achieve comparable or even better performance than the cascaded counterpart when large-scale ASR and MT data is available.", "This work was supported in part by the National Science Foundation of China (Nos. 61876035 and 61732005), the National Key R&D Program of China (No. 2019QY1801), and the Ministry of Science and Technology of the PRC (Nos. 2019YFF0303002 and 2020AAA0107900).", "The authors would like to thank anonymous reviewers for their comments." ]
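The multi-teacher knowledge distillation objective described earlier (Eqs. 12-14) combines the ground-truth CTC and translation losses with cross-entropy terms against the ASR and MT teachers. Below is a minimal PyTorch-style sketch of how such a combined loss could be assembled; it is not the authors' implementation, and the function and argument names (mtkd_loss, ctc_log_probs, asr_teacher_probs, etc.) are illustrative assumptions, with the teacher distributions taken as precomputed soft targets.

```python
import torch


def mtkd_loss(ctc_log_probs, trans_log_probs,        # student log-probs over the shared vocab V
              asr_teacher_probs, mt_teacher_probs,   # teacher distributions Q, same shapes as above
              ctc_loss, trans_loss,                  # ground-truth CTC / translation losses (scalars)
              lam=0.3, alpha=0.5, beta=0.5):
    """Combine ground-truth and distillation terms in the spirit of Eqs. (12)-(14)."""
    # Eq. (12): cross-entropy between the ASR teacher and the CTC student, summed over frames.
    kd_ctc = -(asr_teacher_probs * ctc_log_probs).sum()
    # Eq. (13): cross-entropy between the MT teacher and the ST decoder, summed over target tokens.
    kd_trans = -(mt_teacher_probs * trans_log_probs).sum()
    # Eq. (14): lam weights CTC vs. translation (0.3 in the paper);
    # alpha and beta trade the ground truth against the teacher distributions (0.5 in the paper).
    return (lam * (alpha * ctc_loss + (1 - alpha) * kd_ctc)
            + (1 - lam) * (beta * trans_loss + (1 - beta) * kd_trans))
```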
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "result", "other", "other" ]
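One inference detail mentioned above, averaging the model parameters of the best 5 checkpoints on the development set, can be written in a few lines. The sketch below is a generic PyTorch version under the assumption that each checkpoint is a plain state dict; ESPnet ships its own averaging utility, so this is only an illustration.

```python
import torch


def average_checkpoints(paths):
    """Average model parameters over checkpoint files (e.g., the best 5 on the dev set)."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    # Divide once at the end to get the element-wise mean of every parameter tensor.
    return {k: v / len(paths) for k, v in avg.items()}
```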
[ "Web-crawled data provides a good source of parallel corpora for training machine translation models.", "It is automatically obtained, but extremely noisy, and recent work shows that neural machine translation systems are more sensitive to noise than traditional statistical machine translation methods.", "In this paper, we propose a novel approach to filter out noisy sentence pairs from web-crawled corpora via pre-trained language models.", "We measure sentence parallelism by leveraging the multilingual capability of BERT and use the Generative Pre-training (GPT) language model as a domain filter to balance data domains.", "We evaluate the proposed method on the WMT 2018 Parallel Corpus Filtering shared task, and on our own web-crawled Japanese-Chinese parallel corpus.", "Our method significantly outperforms baselines and achieves a new state-of-the-art.", "In an unsupervised setting, our method achieves comparable performance to the top-1 supervised method.", "We also evaluate on a web-crawled Japanese-Chinese parallel corpus that we make publicly available.", "Training modern neural machine translation (NMT) systems requires large parallel-text resources.", "Publicly-available parallel corpora are mostly paired with English, such as German-English, French-English, Chinese-English, etc., and their domains are limited.", "For building machine translation systems between non-English language pairs, such as Chinese and Japanese, existing parallel corpora are insufficient and often low quality.", "To address this problem, system builders have trained NMT systems on web-crawled data and achieved promising results (Xu and Koehn, 2017; Junczys-Dowmunt, 2018; Schwenk, 2018; Schwenk et al., 2019).", "However, data automatically crawled from the web is extremely noisy.", "Khayrallah and Koehn (2018) and Belinkov and Bisk (2018) show that neural translation models are far more sensitive to noisy parallel training data than statistical machine translation.", "Data selection methods that can filter noisy parallel sentences from large-scale web crawled resources are in demand.", "In this paper, we study the problem in a real-world scenario where we crawl a large Japanese-Chinese parallel corpus from various websites and build open-domain machine translation systems between Japanese and Chinese, by filtering the web crawled parallel corpus.", "In addition, a small amount of clean parallel data is available, in the software domain.", "In order to confirm our results on a public data, we also apply our filter to the WMT 2018 German-English Parallel Corpus Filtering shared task.", "Previous work on parallel corpus filtering performs poorly in our scenario as it either requires large clean parallel corpora or dictionaries (Xu and Koehn, 2017; Artetxe and Schwenk, 2019; Junczys-Dowmunt, 2018; Chaudhary et al., 2019), or relies on multilingual word embeddings and neglects context when measuring translation parallelism (Hangya and Fraser, 2018).", "In this paper, we propose a simple but effective parallel corpus filtering method.", "Multilingual BERT (Devlin et al., 2019) projects multilingual sentences into a shared space and has shown a great potential for cross-lingual model transfer (Pires et al., 2019).", "We use pre-trained multilingual BERT as prior knowledge and fine-tune it on a synthetic dataset.", "This multilingual BERT-based classifier forms an acceptability filter that determines whether or not a sentence pair consists of a bona-fide translation.", "As the domain of training data largely 
affects machine translation model performance, we also introduce a domain filter.", "It uses the pre-trained Generative Pre-training (GPT) as in-domain language model and is an extension of the existing cross-entropy difference based domain filter (Moore and Lewis, 2010; Junczys-Dowmunt, 2018).", "We evaluate our proposed method on the WMT 2018 German-English Parallel Corpus Filtering shared task and achieve a new state-of-the-art.", "Our unsupervised method achieves comparable performance to the top system that is trained on millions of clean parallel sentence pairs.", "Our proposed methods also significantly outperform baselines in our own Japanese-Chinese parallel corpus filtering task.", "We make the following contributions: We propose a novel approach to filter noisy parallel corpora by using pre-trained language models.", "Our approach outperforms strong baselines and achieves a new state-of-the-art.", "We devise an unsupervised filtering approach that does not require an identifiable clean subset of parallel segments.", "Our unsupervised method matches the results of previous supervised methods.", "We release a large web-crawled Japanese-Chinese parallel corpus which can be a useful resource for machine translation research on non-English language pairs.", "1 2 Related Work Several recent works address parallel corpus filtering.", "Denkowski et al. (2012), Dyer et al. (2010) and Heafield (2011) use language models and word alignments to determine how likely sentences are to be a good translation of another.", "Xu and Koehn (2017) introduce a noise filtering tool, Zipporah, that discriminates parallel and non-parallel sentences based on word-frequency vectors and a dictionary.", "Junczys-Dowmunt (2018) proposes a dual conditional cross-entropy filtering method, which achieved first place in the WMT 2018 German-English Parallel Corpus Filtering shared task.", "They train two translation models in inverse directions on millions of parallel sentences and score sentence pairs based on the word-normalized conditional cross-entropy from the translation models.", "Artetxe and Schwenk (2019) and Schwenk (2018) propose a margin-based scoring method that compares the 1", "similarity of the source and target sentence representations.", "The sentence representations are produced by a sentence encoder trained on clean parallel data via a neural encoder-decoder architecture.", "Other works based on sentence embeddings include Hangya and Fraser (2018) and Littell et al. (2018), as well as Schwenk et al. (2019), which mines millions of parallel sentences in 1620 language pairs from Wikipedia.", "These encoder-decoder based methods require large amounts of clean parallel training data and are not applicable in our scenario where available data is noisy.", "Ondrej Bojar (2020) organize an open domain translation challenge where participants are provided a large, noisy set of Japanese-Chinese segment pairs built from web data, and the task is to clean the noisy data and build an end-to-end machine translation system.", "Work on data selection is also related.", "Moore and Lewis (2010); Junczys-Dowmunt (2018) select domain-related data by computing the cross-entropy difference between in-domain and out-domain language models.", "Duh et al. (2013) use neural language models for data selection.", "Axelrod et al. (2011) and Axelrod et al. 
(2015) expand cross-entropy difference filtering to both sides of the parallel corpus.", "Since we aim to build a general machine translation system, instead of selecting data that are relevant to a specific domain, we select data whose domains are as general as possible, by using Generative Pre-training (GPT) models trained on large and diverse corpora.", "In this section we introduce a language detection filter, a translation-acceptability filter, and a domain filter.", "Each filter produces a score for every candidate source/target sentence pair.", "The partial score produced by each filter ranges from 0 to 1.", "Values beyond this range are normalized by min-max normalization: y = ( y min ) / ( max min ) .", "The final score is the product of the partial scores.", "Targeting a web-crawler at a given language pair still results in many pages written in the wrong language.", "For example, while a URL pair may clearly indicate translation (e.g., .jp and .zh), it may happen that the text content is simply copied rather than translated.", "We observe this in both our Japanese-Chinese data and the German-English Paracrawl data set.", "It is necessary to filter out sentence pairs with undesired languages.", "We adopt the fastText (Joulin et al., 2017, 2016) language identification toolkit in our language detection filter.", "For each sentence, the toolkit produces a list of language candidates and their corresponding confidence scores.", "We select the language that has the highest confidence score from fastText as the language of the sentence.", "Sentence pairs that have both of the elements detected as the desired language are assigned score 1 and otherwise 0 .", "By discarding sentence pairs with undesired language IDs, we filter out 27% of our Chinese-Japanese parallel sentences and nearly 70% of the German-English parallel sentences from Paracrawl data set.", "In this section, we introduce our translation acceptability filter, one of the main contributions in the paper.", "It aims to measure the parallelism of sentence pairs and filter out sentence pairs that are not mutual translations.", "The pre-trained language model BERT (Devlin et al., 2019) has been shown to be effective in many NLP tasks as it produces better and meaningful contextualized word representations.", "Multilingual BERT, a transformer Masked Language Model pre-trained on Wikipedia dumps of 104 languages, shows remarkable multilingual capability, given that it is not exposed to any multilingual signals, such as parallel data or dictionaries.", "A thorough study by Pires et al. 
(2019) shows the promising zero-shot cross-lingual model transfer ability of multilingual BERT on named entity recognition and part-of-speech tagging tasks.", "They hypothesize that having language-universal word pieces, such as numbers and URLs, mapped to a shared space forces the co-occurring pieces to also be mapped to a shared space, thus spreading the effect to other word pieces, until different languages are close in the shared space.", "We use pre-trained multilingual BERT to encode a sentence pair ( s, t ) and create the sentence embeddings v s and v t by using the representations of the [CLS] token of s and t .", "We find that the cosine similarity between v s and v t does not necessarily reflect the parallelism of sentence s and t .", "We suspect that the word representations from multilingual BERT are loosely aligned across languages as there is no parallel data or dictionary used during the pre-training.", "A similar observation was made in Lample et al. (2018), where the cross-lingual word embeddings learned in an unsupervised manner are loosely aligned.", "However, after fine-tuning on a few anchor pairs (word translations), they become more aligned.", "Similarly, we use an unsupervised synthetic training set as anchors to fine-tune multilingual BERT with a binary classification objective.", "Xu and Koehn (2017) did similar work to train a filtering classifier on synthetic data, but via bag-of-words translation features.", "Synthetic Training Set.", "In cases where a small number of clean parallel sentence pairs are available, we use them as positive training samples for our classifier.", "In Japanese-Chinese filtering, we use around 300k sentence pairs, mostly from open-source software documentation, 2 as our positive samples.", "In extreme cases where no identifiable, clean parallel data is available, we sub-select high quality parallel sentences, which are used as positive samples, from the noisy parallel corpus based on the Hunalign (Varga et al., 2007) sentence-alignment score.", "We sample negative instances by simulating the noise produced by web crawling and alignment.", "Given a positive pair ( s, t ) , we create a negative sample by randomly choosing one of the following options: Randomly select a target sentence from its adjacent sentences within a window size of k (where k = 2 in our experiments).", "Randomly truncate 30%-70% of the source or target sentence.", "Swap the order of 30%-70% words of the source or target sentence.", "To balance the training set, we create the same number of positive instances and sampled negative instances.", "Binary Classification Objective.", "We feed the sentence pair ( s, t ) into multilingual BERT, which accepts two-sentence input due to its next-sentence prediction objective (Devlin et al., 2019).", "Instead of using the [CLS] token representation, we use a Convolutional Network (CNN) layer that takes the BERT output and generates the final representation of the pair.", "Our experiments show that using CNN layer pooling achieves marginal gains over [CLS] pooling.", "The final layer is a feed-forward network 2 GNOME, Ubuntu, OpenOffice, and KDE data set, from http://opus.nlpl.eu/ with a softmax activation function to produce label probabilities.", "Web-crawled data contains noise of various types, due to the complicated structure of web pages.", "By inspecting the training data generated by the above methods, we notice much of the content is not well-formed, e.g., concatenated lists of months and dates, randomly mixed content 
from tables, series of emojis and punctuation marks, etc.", "These are certainly written in the desired language, thus not filtered out by language detection.", "The translation acceptability filter also accepts them.", "However, such malformatted data is not helpful to machine translation models, and we prefer a training corpus to contain meaningful content.", "For our domain filter, we adopt the cross-entropy difference scoring method proposed by Moore and Lewis (2010) and Junczys-Dowmunt (2018).", "More specifically, we treat a general domain monolingual corpus as our in-domain data set I , and the noisy parallel corpus without any filtering as our non-domain data set N .", "We train two language models LI and LN and measure how the target sentence t is domain-related to I and less domain-related to N by a perplexity ratio, which is a transformation of cross-entropy difference: f dom ( s, t ) = PPLN ( t ) PPLI ( t ) where PPLM ( x ) is the word-normalized perplexity of the sentence x defined by the language model LM : PPLM ( x ) = exp( 1 | x | | x | (cid:80) i =1 log PM ( x i | x <i )) The intuition is fairly straightforward: the higher the perplexity of the sentence to the non-domain corpus and the lower the perplexity of the sentence to the in-domain corpus, the more likely the sentence is meaningful.", "Our contribution is to use GPT (Radford et al., 2019) as our in-domain language model, instead of news domain text (Junczys-Dowmunt, 2018).", "This minor yet crucial change yields non-trivial performance gains in our experiments for German-English parallel corpus filtering.", "As GPT is trained on data from various sources, such as Wikipedia, Reddit, news websites, etc., it covers a wide range of domains, so our filtered data is more diverse and performs better on multi-domain test sets, as well as in the real world application.", "For our in-domain language model, we use pre-trained Chinese GPT 3 for Japanese-Chinese and pre-trained GPT-2 4 for German-English.", "We randomly sample 4 million sentences from the unfiltered noisy parallel corpus and use KenLM (Heafield, 2011) to train the non-domain language model.", "Perplexity scores from different language models are compatible.", "Following Junczys-Dowmunt (2018), we introduce two operations, clip and cutoff, to postprocess the domain filter score f dom ( s, t ) .", "The clip operation clips the maximum value of the domain score to a threshold clip : f clip ( x, clip ) = min ( x, clip ) and the cutoff operation modifies scores below a threshold cutoff and changes them to 0 : f cutoff ( x, cutoff ) = (cid:40) x, if x > cutoff 0 , otherwise clip prevents a high monolingual in-domain score from overwriting scores from other filters.", "cutoff eliminates out-domain sentence pairs and ensures that highly parallel sentence pairs are at least somewhat in-domain.", "We tune clip and cutoff on the development set.", "The scoring method of our final domain filter becomes: f dom ( s,t ) = f clip ( f cutoff ( f dom ( s,t ) , cutoff ) , clip ) 4 Experiments and Results 4.1 WMT 2018 Parallel Corpus Filtering We use the WMT 2018 Parallel Corpus Filtering shared task (Koehn et al., 2018) as a benchmark to evaluate our methods.", "Participants in the shared task are provided a very noisy 1 billion word (En-glish token count) German-English corpus crawled from the web by the Paracrawl project.", "5 The task is to sub-select clean sentence pairs amounting to", "(a) 10 million words, and", "(b) 100 million words, counted on the English side.", "The 
quality of the 3 https://github.com/dbiir/UER-py 4 https://github.com/huggingface/transformers 5 https://paracrawl.eu resulting subsets is determined by training a neural machine translation system (Marian) 6 (Junczys-Dowmunt et al., 2018) on this data.", "The quality of the machine translation system is measured by BLEU score on six test sets from various domains.", "As the task is to address the challenge of the data quality and not domain-relatedness of the data for a particular use, sub-sampling the corpus for relevance to the news domain is not encouraged by the shared task organizers.", "All parameters used for training Marian machine translation models are the same as described in Koehn et al. (2018).", "We use CLIP = 5 and CUTOFF = 1 .", "5 in the experiments.", "We use 4 GPUs for training.", "Due to the lack of publicly available Japanese-Chinese parallel corpus, we build a data harvesting pipeline to fetch Japanese-Chinese parallel text from the Internet.", "The crawled bi-text are extremely noisy, but we rely on the proposed parallel corpus filtering method to clean up the data and eventually train a satisfactory machine translation system.", "In this paper, we use these crawled data as another test bed to evaluate our proposed method.", "A single run of the of the data harvesting pipeline is the following.", "We first identify Japanese-Chinese parallel webpages by programmatically analyzing the URL structure of the 5 billion URLs from CommonCrawl, 7 for example, https://www.gotokyo.org/jp/ and https: //www.gotokyo.org/cn/ only differ by jp and cn .", "Then we download the webpages and conduct a series of cascaded data cleaning methods, including removing HTML markups, sentence segmentation, etc.", "Finally we perform segment alignment and filtering.", "Our workflow consists of several runs of the data harvesting pipeline with entry points at different modules (for instance, a more targeted crawling of higher quality material from a previous run).", "We also integrate existing Japanese-Chinese parallel datasets from other publicly available sources for a final parallel data size of 527m characters in 20.9M parallel segments.", "We include all details of our data harvesting 6 https://github.com/marian-nmt/marian (We do not evaluate our method using Moses, the statistical machine translation system provided by WMT, as neural machine translation better fits our real world scenario.) 7 https://commoncrawl.org/ pipeline, as well as the statistics of the obtained dataset, in Appendix A. Test and Development Dataset.", "We curate two parallel test sets by manually processing web data involving daily expressions (337 parallel segments) and news (437 parallel segments).", "For our development set, we use 5304 Japanese-Chinese basic expressions.", "WMT 2018 Parallel Corpus Filtering.", "Table 1 presents the BLEU scores of neural machine translation systems trained on 10 million and 100 million words of training data, selected by different filtering methods.", "In the table, we list the top three performers from the shared task, as well as another two work that are similar to ours.", "Junczys-Dowmunt (2018) has a dual conditional cross-entropy adequacy filter and a domain filter trained on news corpora.", "Hangya and Fraser (2018) generate sentence embeddings by using unsupervised word embedding alignment and measure parallelism via multilingual sentence embedding similarity.", "Chaudhary et al. 
(2019) leverage massive publicly available English-German parallel corpora to train multilingual sentence embeddings via bidirectional Long Short Term Memory (LSTM) encoder-decoder network.", "We replicate the adequacy and domain-news fil-ters from Junczys-Dowmunt (2018) and obtain similar results.", "By replacing the domain-news filter with our domain-GPT filter, we achieve new state-of-the-art scores on 10M and 100M word data sets (bold scores in the table).", "Given the very compact score range in the shared task (Koehn et al., 2018), we consider this gain very successful.", "It is stated in the shared task that the test sets are from multiple domains.", "Domain-news filter in Junczys-Dowmunt (2018) tends to select sentence pairs from news domain as the filter is trained on news domain data, and this leads to a biased parallel corpus for training machine translation system.", "Our proposed domain-GPT filter is trained from various sources and thus covers a wide range of domains, so our filtered data is more diverse and performs better on multi-domain test sets.", "For our supervised acceptability filter, we train a mulitlingual BERT classifier on clean parallel sentences as positive examples and randomly sampling negative instances, using the method described in Section 3.2.", "For our unsupervised acceptabil-Method Supervised Unsupervised 10M 100M Junczys-Dowmunt (2018) top-1 x 28.62 32.05 Lu et al. (2018) top-2 x 27.60 31.93 Lo et al. (2018) top-3 x 27.41 31.88 Hangya and Fraser (2018) x 22.96 30.54 Chaudhary et al. (2019) x 26.98 30.77 adequacy (our replication of J-D 2018) x 27.12 31.20 + domain-news (our replication of J-D 2018) x 28.66 32.01 + domain-GPT x 29.09 32.11 supervised acceptability x 27.09 31.56 + domain-GPT x 28.94 32.03 unsupervised acceptability x 27.03 30.65 + domain-GPT x 28.68 32.02 all methods above apply language detection filter beforehand.", "our new state-of-the-art combines adequacy (Junczys-Dowmunt, 2018) + our proposed domain-GPT .", "our unsupervised acceptability + domain-GPT is comparable to top supervised method.", "ity filter, we rank noisy parallel sentences by", "(a) the alignment score from Hunalign, and", "(b) the GPT domain filter score.", "We then select the top 10M words (counted on English side) worth of sentence pairs as positive examples.", "This makes the method completely unsupervised, not requiring any identifiable clean parallel data.", "With finetuning multilingual BERT on sentences pairs aligned by Hunalign, the unsupervised acceptability already achieves comparable performance to Chaudhary et al. (2019) which use massive public parallel data.", "After applying the unsupervised domain-GPT filter, we achieve a surprisingly good result (underlined scores in the table), comparable to the best supervised method.", "Japanese-Chinese Parallel Corpus Filtering.", "In Table 2, we evaluate machine translation systems trained on data generated by different filtering methods.", "Unfiltered refers to data generated by Hunalign without any filtering.", "Chaudhary et al. 
(2019) refer to LASER, the top performing filtering system in WMT 2019 Parallel Corpus Filtering shared task.", "We use the pre-trained 93-language LASER model to generate sentence pair scores.", "The model is trained on a large parallel corpus that contains 3.2M English-Japanese and 8.2M English-Chinese sentence pairs (English is used as pivot to connect Japanese and Chinese during their training).", "Adequacy refers to the dual conditional cross-entropy filtering method that we replicate from Junczys-Dowmunt (2018).", "It is trained on around 300k high quality software-domain parallel sentences from Microsoft Developer Network (MSDN) and Ubuntu.", "The GPT domain filter uses a pre-trained Chinese GPT 8 as the in-domain language model and trains a four-gram KenLM (Heafield, 2011) language model on the Chinese side of our 4 million unfiltered noisy parallel sentences as a non-domain language model.", "Acceptability is our proposed multilingual BERT based filtering method, which is trained on a synthetic dataset, where we use 300k high-quality software domain parallel sentences as positive examples and sample equal-sized negative sentence pairs, using the sampling methods described in Section 3.2.", "Chaudhary et al. (2019) train a multilingual sentence encoder on various English-Foreign Language parallel corpus and prove the zero-shot cross-lingual transfer capability between non-English pairs, such as Japanese and Chinese.", "However, when English is used as the pivot, the distance between Japanese and Chinese become larger, resulting in not effectively capturing the correlation between them.", "The conditional cross-entropy metric in adequacy relies on the quality of machine translation system.", "Due to the difficulty of training high-quality machine translation systems on 300k sentence pairs, the adequacy filter cannot produce accurate conditional cross-entropy.", "The GPT domain filter assigns higher score to sentences that are more like human natural language and downgrades malformatted sentence pairs.", "It is effective in the German-English filtering task, where a fixed-size subset is selected and we want to fill the subset with as much domain relevant data as possible.", "However, to best fit the real world scenario where the goal is to have the best machine translation system, we do not limit the amount of data to select for training machine translation system and let the system decide the amount of the data to select, according to each filtering method.", "We rank sentence pairs by their filtering scores and train a MT system on N percentage of the top ranked data.", "N is selected based on the development set and we report the best BLEU score.", "Under this setting, adding a domain filter makes the model use less data ( N = 50% 8 pre-trained Mixedlarge corpus + GptEncoder + LmTarget Model in https://github.com/dbiir/UER-py Filtering Probability Threshold Q u a l i t y o f P a i r s ( P / R ) 0.2 0.4 0.6 0.8 1 0.2 0.4 0.6 0.8 Precision Recall Figure 1: Precision and recall curves of the acceptability filter on our internal JA-ZH filtering test set.", "vs N = 75% ), but we do not observe any performance gain, as we suspect that the malformatted but parallel sentence pairs are neither harmful or helpful to the model, and filtering them out makes no difference in performance of the model.", "High Precision Parallel Corpus Filtering.", "For analysis purposes, we manually annotate a small set of 320 sentence pairs randomly selected from our original web crawled Japanese-Chinese data 
set.", "24% of the sentence pairs are labeled not mutual translations.", "As stated in Khayrallah and Koehn (2018), neural machine translation models are more sensitive to noise than statistical machine translation models, so having high precision filtering results as training data is necessary.", "In Figure 1, we show precision and recall curves for our proposed filtering method on this labeled test set, under different threshold settings.", "The threshold is selected based on the filtering classifier probability produced by the softmax layer.", "By setting the threshold to 0.9, we are able to obtain 97.7% precision high-quality parallel sentences, while still having 66.9% recall.", "In this paper, we address the parallel corpus filtering problem in machine translation.", "We propose a novel filtering method using pre-trained language models.", "Our method outperforms strong baselines and achieves a new state-of-the-art.", "We release a large Japanese-Chinese web crawled parallel corpus for the research purposes.", "Because it is artifi-cial to use synthetic data for training a filter classifier, future work can focus on a better objective that models parallelism more smoothly.", "Future work also includes extending the method to low-resource languages not covered by multilingual BERT.", "Acknowledgments We would like to thank the anonymous reviewers for their constructive feedback.", "Kevin Duh, Graham Neubig, Katsuhito Sudoh, and Ha-jime Tsukada.", "2013.", "Adaptation data selection using neural language models: Experiments in machine translation.", "In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL) .", "Chris Dyer, Jonathan Weese, Hendra Setiawan, Adam Lopez, Ferhan Ture, Vladimir Eidelman, Juri Gan-itkevitch, Phil Blunsom, and Philip Resnik.", "2010.", "cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models.", "In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), System Demonstrations .", "Viktor Hangya and Alexander Fraser.", "2018.", "An unsupervised system for parallel corpus filtering.", "In Proceedings of the Third Conference on Machine Translation: Shared Task Papers .", "Kenneth Heafield.", "2011.", "KenLM: Faster and smaller language model queries.", "In Proceedings of the Sixth Workshop on Statistical Machine Translation .", "Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H erve J egou, and Tomas Mikolov.", "2016.", "FastText.zip: Compressing text classification models.", "arXiv preprint arXiv:1612.03651 .", "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov.", "2017.", "Bag of tricks for efficient text classification.", "In Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics (EACL) .", "Marcin Junczys-Dowmunt.", "2018.", "Dual conditional cross-entropy filtering of noisy parallel corpora.", "In Proceedings of the Third Conference on Machine Translation: Shared Task Papers .", "Huda Khayrallah and Philipp Koehn.", "2018.", "On the impact of various types of noise on neural machine translation.", "In Proceedings of the Workshop on Neural Machine Translation and Generation .", "Philipp Koehn, Huda Khayrallah, Kenneth Heafield, and Mikel L Forcada.", "2018.", "Findings of the WMT 2018 shared task on parallel corpus filtering.", "In Proceedings of the Third Conference on Machine Translation: Shared Task Papers .", "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, 
and Marc'Aurelio Ranzato.", "2018.", "Unsupervised machine translation using monolingual corpora only.", "In Proceedings of the 6th International Conference on Learning Representations (ICLR) .", "References Mikel Artetxe and Holger Schwenk.", "2019.", "Margin-based parallel corpus mining with multilingual sentence embeddings.", "In Proceedings of the Annual Meeting of the Association for Computational Linguistics .", "Amittai Axelrod, Xiaodong He, and Jianfeng Gao.", "2011.", "Domain adaptation via pseudo in-domain data selection.", "In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP) .", "Amittai Axelrod, Yogarshi Vyas, Marianna Martindale, and Marine Carpuat.", "2015.", "Class-based n-gram language difference models for data selection.", "In Proceedings of the International Workshop on Spoken Language Translation (IWSLT) .", "Yonatan Belinkov and Yonatan Bisk.", "2018.", "Synthetic and natural noise both break neural machine translation.", "In Proceedings of the Sixth International Conference on Learning Representations (ICLR) .", "Vishrav Chaudhary, Yuqing Tang, Francisco Guzm an, Holger Schwenk, and Philipp Koehn.", "2019.", "Low-resource corpus filtering using multilingual sentence embeddings.", "In Proceedings of the Fourth Conference on Machine Translation .", "Chenhui Chu, Toshiaki Nakazawa, and Sadao Kurohashi.", "2015.", "Integrated parallel sentence and fragment extraction from comparable corpora: A case study on ChineseJapanese Wikipedia.", "ACM Transactions on Asian and Low-Resource Language Information Processing .", "Raj Dabre and Sadao Kurohashi.", "2017.", "MMCR4NLP: multilingual multiway corpora repository for natural language processing.", "CoRR , abs/1710.01025.", "Michael Denkowski, Greg Hanneman, and Alon Lavie.", "2012.", "The CMU-Avenue French-English translation system.", "In Proceedings of the Seventh Workshop on Statistical Machine Translation .", "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.", "2019.", "BERT: Pre-training of deep bidirectional transformers for language understanding.", "In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT) ." ]
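The GPT-based domain filter in the corpus-filtering work above scores a pair by the perplexity ratio PPL_N(t) / PPL_I(t) and then post-processes the score with the cutoff and clip operations (CUTOFF = 1.5 and CLIP = 5 in its experiments). A small sketch of that scoring step follows; it assumes per-token log-probabilities from the in-domain model (e.g., GPT-2) and the non-domain model (e.g., a KenLM model) are already available, and the function names are illustrative rather than the paper's.

```python
import math


def perplexity(token_log_probs):
    """Word-normalized perplexity from per-token natural-log probabilities."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))


def domain_score(in_domain_log_probs, non_domain_log_probs, cutoff=1.5, clip=5.0):
    """Perplexity-ratio domain score with the cutoff and clip post-processing steps."""
    score = perplexity(non_domain_log_probs) / perplexity(in_domain_log_probs)
    score = score if score > cutoff else 0.0   # cutoff: zero out likely out-of-domain pairs
    return min(score, clip)                    # clip: cap the in-domain bonus
```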
[ "abstain", "abstain", "objective", "method", "abstain", "objective", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "method", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "objective", "abstain", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "other", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "other", "method", "method", "method", "method", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
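The acceptability filter above is trained on a synthetic set in which negatives are created from positive pairs by pairing with a nearby target sentence (window k = 2), truncating 30%-70% of one side, or shuffling the order of 30%-70% of its words. One possible reading of that sampling procedure is sketched below; the exact corruption details (which span is truncated or shuffled) are assumptions, not taken from the paper.

```python
import random


def make_negative(src, tgt, tgt_neighbors, k=2):
    """Turn a positive pair (src, tgt) into one negative pair by simulating crawling/alignment noise."""
    choice = random.choice(["adjacent", "truncate", "shuffle"])
    if choice == "adjacent" and tgt_neighbors:
        # Misalignment: pair src with a target sentence within a window of +-k around tgt.
        return src, random.choice(tgt_neighbors[: 2 * k])
    if choice == "adjacent":
        choice = "truncate"                      # fall back when no neighbors are available
    side = random.choice(["src", "tgt"])
    words = (src if side == "src" else tgt).split()
    n = max(1, int(len(words) * random.uniform(0.3, 0.7)))
    if choice == "truncate":
        words = words[:-n] or words[:1]          # drop 30-70% of the tokens from the end
    else:
        head = words[:n]
        random.shuffle(head)                     # permute the order of 30-70% of the tokens
        words = head + words[n:]
    corrupted = " ".join(words)
    return (corrupted, tgt) if side == "src" else (src, corrupted)
```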
[ "Word and sentence embeddings are useful feature representations in natural language processing.", "However, intrinsic evaluation for embeddings lags far behind, and there has been no significant update since the past decade.", "Word and sentence similarity tasks have become the de facto evaluation method.", "It leads models to overfit to such evaluations, negatively impacting embedding models' development.", "This paper first points out the problems using semantic similarity as the gold standard for word and sentence embedding evaluations.", "Further, we propose a new intrinsic evaluation method called EvalRank , which shows a much stronger correlation with downstream tasks.", "Extensive experiments are conducted based on 60+ models and popular datasets to certify our judgments.", "Finally, the practical evaluation toolkit is released for future benchmarking purposes.", "1 1 Introduction Distributed representation of words (Bengio et al., 2003; Mikolov et al., 2013a; Pennington et al., 2014; Bojanowski et al., 2017) and sentences (Kiros et al., 2015; Conneau et al., 2017; Reimers and Gurevych, 2019; Gao et al., 2021) have shown to be extremely useful in transfer learning to many NLP tasks.", "Therefore, it plays an essential role in how we evaluate the quality of embedding models.", "Among many evaluation methods, the word and sentence similarity task gradually becomes the de facto intrinsic evaluation method.", "Figure 1 shows examples from word and sentence similarity datasets.", "In general, the datasets consist of pairs of words ( w 1 , w 2 ) (or sentences) and human-annotated similarity scores S h .", "To evaluate an embedding model ( ) , we first extract embeddings for ( w 1 , w 2 ): ( e 1 , e 2 ) = ( ( w 1 ) , ( w 2 ) ).", "Then, a similarity measure is applied to compute an predicted score S p = sim ( e 1 , e 2 ) , where cosine similarity is adopted as sim unquestionably in the majority of cases.", "Finally, the correlation between S h and S p is computed, and a higher correlation suggests good alignment with human annotations and a better embedding model.", "Many studies, especially those targeting on information retrieval via semantic search and clustering (Reimers and Gurevych, 2019; Su et al., 2021), have used the similarity task as the only or main evaluate method (Tissier et al., 2017; Mu et al., 2018; Arora et al., 2017; Li et al., 2020; Gao et al., 2021).", "We observe a number of issues in word or sentence similarity tasks ranging from dataset collection to the evaluation paradigm, and consider that focusing too much on similarity tasks would negatively impact the development of future embedding models.", "The significant concerns are summarized as follows, which generally apply to both word and sentence similarity tasks.", "First, the definition of similarity is too vague.", "There exist complicated relationships between sampled data pairs, and almost all relations contribute to the similarity score, which is challenging to non-expert annotators.", "Second, the similarity evaluation tasks are not directly 6060 relevant to the downstream tasks.", "We believe it is because of the data discrepancy between them, and the properties evaluated by similarity tasks are not the ones important to downstream applications.", "Third, the evaluation paradigm can be tricked with simple post-processing methods, making it unfair to benchmark different models.", "Inspired by Spreading-Activation Theory (Collins and Loftus, 1975), we propose to evaluate embedding models as a retrieval 
task, and name it as EvalRank to address the above issues.", "While similarity tasks measure the distance between similarity pairs from all similarity levels, EvalRank only considers highly similar pairs from a local perspective.", "Our main contributions can be summarized as follows: 1 We point out three significant problems for using word and sentence similarity tasks as the de facto evaluation method through analysis or experimental verification.", "The study provides valuable insights into embeddings evaluation methods.", "2 We propose a new intrinsic evaluation method, EvalRank , that aligns better with the properties required by various downstream tasks.", "3 We conduct extensive experiments with 60+ models and 10 downstream tasks to certify the effectiveness of our evaluation method.", "The practical evaluation toolkit is released for future benchmarking purposes.", "Word embedding has been studied extensively, and popular work (Mikolov et al., 2013a; Pennington et al., 2014; Bojanowski et al., 2017) are mainly built on the distributional hypothesis (Harris, 1954), where words that appear in the same context tend to share similar meanings.", "The early work on sentence embedding are either built upon word embedding (Arora et al., 2017; Rckl et al., 2018; Almarwani et al., 2019) or follow the distributional hypothesis on a sentence level (Kiros et al., 2015; Hill et al., 2016; Logeswaran and Lee, 2018).", "Recent development of sentence embedding are incorporating quite different techniques including multi-task learning (Cer et al., 2018), supervised inference data (Conneau et al., 2017; Reimers and Gurevych, 2019), contrastive learning (Zhang et al., 2020; Carlsson et al., 2020; Yan et al., 2021; Gao et al., 2021) and pre-trained language models (Li et al., 2020; Wang and Kuo, 2020; Su et al., 2021).", "Nonetheless, even though different methods choose different evaluation tasks, similarity task is usually the shared task for benchmarking purposes.", "Similarity task is originally proposed to mimic human's perception about the similarity level between word or sentence pairs.", "The first word similarity dataset was collected in 1965 (Rubenstein and Goodenough, 1965), which consists of 65 word pairs with human annotations.", "It has been a standard evaluation paradigm to use cosine similarity between vectors for computing the correlation with human judges (Agirre et al., 2009).", "Many studies raise concerns about such evaluation paradigm.", "Faruqui et al. (2016) and Wang et al. (2019b) points out some problems with word similarity tasks, including low correlation with downstream tasks and lack of task-specific similarity.", "Reimers et al. (2016), Eger et al. (2019) and Zhelezniak et al. 
(2019) states current evaluation paradigm for Semantic Textual Similarity (STS) tasks are not ideal.", "One most recent work (Abdalla et al., 2021) questions about the data collection process of STS datasets and creates a new semantic relatedness dataset (STR) by comparative annotations (Lou-viere and Woodworth, 1991).", "There are also other intrinsic evaluation methods for word and sentence embedding evaluation, but eventually did not gain much popularity.", "Word analogy task is first proposed in (Mikolov et al., 2013a,c) to detect linguistic relations between pairs of word vectors.", "Zhu and de Melo (2020) recently expanded the analogy concept to sentence level.", "However, the analogy task is more heuristic and fragile as an evaluation method (Gladkova et al., 2016; Rogers et al., 2017).", "Recently, probing tasks have been proposed to measure intriguing properties of sentence embedding models without worrying much about practical applications (Zhu et al., 2018; Conneau et al., 2018; Baranckov and Bo-jar, 2019).", "Because of the lack of effective intrinsic evaluation methods, Reimers and Gurevych (2019) and Wang et al. (2021) seeks to include more domain-specific tasks for evaluation.", "In this work, we discuss the problems of similarity tasks both on word and sentence levels.", "They are highly similar from data collection to evaluation 6061 paradigm and are troubled by the same problems.", "First, the concept of similarity and relatedness are not well-defined.", "Similar pairs are related but not vise versa.", "Taking synonym, hypernym, and antonym relations as examples, the similarity rank should be synonym > hypernym > antonym while the relatedness rank should be synonym > hypernym antonym.", "This was not taken into consideration when constructing datasets.", "Agirre et al. (2009) intentionally split one word similarity dataset into similarity and relatedness subsets.", "However, we find that obtained subsets are erroneous towards polysemy, and the relatedness between pair (stock', egg', 1.81) is much lower than pair (stock', oil', 6.34).", "It is because only the financial stock market' is compared but not the stock of supermarkets. Furthermore, relationships between samples are far more complicated than currently considered, which is a challenge to all current datasets. Second, the annotation process is not intuitive to humans. The initial goal of the similarity task is to let the model mimic human perception. However, we found that the instructions on similarity levels are not well defined. For example, on STS 13 16 datasets, annotators must label sentences that share some details' with a score of 2 and on the same topic' with a score of 1. According to priming effect theory, (Meyer and Schvan-eveldt, 1971; Weingarten et al., 2016), humans are more familiar with ranking several candidate samples based on one pivot sample (priming stimulus).", "Therefore, a more ideal way of annotation is to give one pivot sample (e.g. cup') and rank candidates with different similarity levels (e.g. 
trophy', tableware', food', article', cucumber').", "In other words, it is more intuitive for human to compare (a,b) > (a,c) than (a,b) > (c,d) as far as similarity is concerned.", "However, in practice, it is hard to collect a set of candidates for each pivot sample, especially for sentences.", "In previous studies, it was found that the performance of similarity tasks shows little or negative correlation with the performance of downstream tasks (Faruqui et al., 2016; Wang et al., 2019b, 2021).", "An illustration is shown in Table 1a.", "We think there are two reasons behind 1) low testing Score (rank) STS-B SST2 MR GloVe 47.95 (4) 79.52 (6 ) 77.54 (5 ) InferSent 70.94 (3) 83.91 (3) 77.61 (4 ) BERT-cls 20.29 (6) 86.99 (1 ) 80.99 (1 ) BERT-avg 47.29 (5) 85.17 (2 ) 80.05 (2 ) BERT-flow 71.76 (2) 80.67 (4 ) 77.01 (6 ) BERT-whitening 71.79 (1) 80.23 (5 ) 77.96 (3 )", "First, similarity datasets have their data source and are not necessarily close to the corpus of downstream tasks.", "For example, Baker et al. (2014) collect word pairs for verbs only while Luong et al. (2013) intentionally test on rare words.", "Also, for STS datasets, (Agirre et al., 2012) annotates on sentence pairs from paraphrases, video captions, and machine translations, which has limited overlap on downstream tasks like sentiment classification.", "Second, the original goal for the similarity task is to mimic human perceptions.", "For example, STS datasets are originally proposed as a competition to find the most effective STS systems instead of a gold standard for generic sentence embedding evaluation.", "Some properties evaluated by similarity tasks are trivial to downstream tasks, and it is more important to test on mutually important ones.", "As examples in Figure 1, the similarity tasks inherently require the model to predict sim( p 1 ) > sim( p 2 ) and sim( p 5 ) > sim( p 4 ), which we believe are unnecessary for most downstream applications.", "Instead, similar pairs are more important than less similar pairs for downstream applications (Keklinen, 2005; Reimers et al., 2016).", "Therefore, it is enough for good embedding models to focus on gathering similar pairs together while keeping dissimilar ones far away to a certain threshold.", "As similarity tasks become one de facto evaluation method for embedding models, recent work tend to overfit the current evaluation paradigm, including the choice of similarity measure and the post-processing step.", "Similarity Metrics.", "Cosine similarity is the default choice for similarity tasks.", "However, simply changing the similarity metric to other commonly used ones can lead to contradictory results.", "In Table 1b, we compare recent five BERT-based sentence embedding models including BERT (De-vlin et al., 2019), BERT-whitening (Su et al., 2021), BERT-flow (Li et al., 2020), SBERT (Reimers and Gurevych, 2019) and SimCSE (Gao et al., 2021).", "2 The results on standard STS-Benchmark testset are reported under both cosine and l 2 similarity.", "As we can see, the performance rank differs under different similarity metrics.", "This is especially true for BERT-flow and BERT-whitening, which do not even outperform their baseline models when evaluating with l 2 metric.", "Therefore, we can infer that some models overfit to the default cosine metric for similarity tasks.", "Whitening Tricks.", "A number of studies attempted the post-processing of word embeddings (Mu et al., 2018; Wang et al., 2019a; Liu et al., 2019b) and sentence embeddings (Arora et al., 2017; Liu et al., 
2019a; Li et al., 2020; Su et al., 2021).", "The shared concept is to obtain a more isotropic embedding space (samples evenly distributed across directions) and can be summarized as a space whitening process.", "Even though the whitening tricks help a lot with similarity tasks, we found it is usually not applicable to downstream tasks or even hurt the model performance.", "3 We think the whitening methods are overfitted to similarity tasks and would like 2 Experimental details in Appendix B. 3 Analysis in Appendix C.1.", "First, we take the whole STS-Benchmark dataset and create subsets of sentence pairs from certain similarity levels.", "We test on two baseline sentence embedding models: GloVe, BERT; three whitening tricks: ABTT on GloVe (Mu et al., 2018), BERT-whitening, BERT-flow; two strong sentence embedding models that perform well on both STS and downstream tasks: SBERT, SimCSE.", "Figure 2 shows the result, and we can see that the whitening-based methods are boosting the baseline performance mainly for less similar pairs (e.g., pairs with a similarity score within [2,0]).", "In contrast, the models that perform well on downstream tasks show consistent improvement on all subsets with different similarity scores.", "As discussed in Section 3.2, highly similar pairs are more critical than less similar pairs for downstream tasks.", "Since the postprocessing methods mainly help with less similar pairs, they do not help much on downstream tasks.", "In cognitive psychology, Spreading-Activation Theory (SAT) (Collins and Loftus, 1975; Anderson, 1983) is to explain how concepts store and interact within the human brain.", "Figure 3 shows one example about the concept network.", "In the network, only highly related concepts are connected.", "To find the relatedness between concepts like engine and street , the activation is spreading through mediating concepts like car and ambulance with decaying factors.", "Under this theory, the similarity task is measuring the association between any two concepts in the network, which requires complicated long-distance activation propagation.", "Instead, to test the soundness of the concept network, it is enough to ensure the local connectivity between concepts.", "Moreover, the long-distance relationships can be inferred thereby with various spreading activation 6063 Type # pos pairs # background samples Source EvalRank Word 5,514 22,207 Word Similarity Datasets & Wiki Sent 6,989 24,957 STS-Benchmark & STR Table 2: Statistics of EvalRank Datasets algorithms (Cohen and Kjeldsen, 1987).", "Therefore, we propose EvalRank to test only on highly related pairs and make sure they are topologically close in the embedding space.", "It also alleviates the problems of similarity tasks.", "First, instead of distinguishing multifaceted relationships, we only focus on highly related pairs, which are intuitive to human annotators.", "Second, it shows a much stronger correlation with downstream tasks as desired properties are measured.", "Third, as we treat the embedding space from a local perspective, it is less affected by the whitening methods.", "We frame the evaluation of embeddings as a retrieval task.", "To this purpose, the dataset of EvalRank contains two sets: 1) the positive pair set P = { p 1 , p 2 , ..., p m } and 2) the background sample set C = { c 1 , c 2 , ..., c n } .", "Each positive pair p i = ( c x , c y ) in P consists of two samples in C that are semantically similar.", "For each sample ( c x ) and its positive correspondence ( c y ), a good 
embedding model should have their embeddings $(e_x, e_y)$ close in the embedding space.", "Meanwhile, the other background samples should be located farther away from the sample $c_x$.", "Some samples in the background may also be positive samples.", "We assume this rarely happens and is negligible if good datasets are constructed.", "Formally, given an embedding model $\Phi(\cdot)$, the embeddings for all samples in C are computed as $\{e_1, e_2, \ldots, e_n\} = \{\Phi(c_1), \Phi(c_2), \ldots, \Phi(c_n)\}$.", "The cosine similarity and $l_2$ similarity between two samples $(c_x, c_y)$ are defined as $S_{cos}(c_x, c_y) = \frac{e_x^T e_y}{\lVert e_x \rVert \, \lVert e_y \rVert}$ and $S_{l_2}(c_x, c_y) = \frac{1}{1 + \lVert e_x - e_y \rVert}$. Further, the similarity score is used to sort all background samples in descending order, and the performance at each positive pair $p_i$ is measured by the rank of $c_x$'s positive correspondence $c_y$", "w.r.t. all background samples: $rank_i = rank(S(c_x, c_y), [\Vert_{j=1, j \neq x}^{n} S(c_x, c_j)])$, where $\Vert$ refers to the concatenation operation.", "To measure the overall performance of the model $\Phi(\cdot)$ on all positive pairs in P, the mean reciprocal rank (MRR) and Hits@k scores are reported, and a higher score indicates a better embedding model: $MRR = \frac{1}{m}\sum_{i=1}^{m}\frac{1}{rank_i}$ and $Hits@k = \frac{1}{m}\sum_{i=1}^{m}\mathbb{1}[rank_i \leq k]$. Note that there are two similarity metrics, and we found that $S_{cos}$ shows a better correlation with downstream tasks while $S_{l_2}$ is more robust to whitening methods.", "We use $S_{cos}$ in the experiments unless otherwise specified.", "Word-Level.", "We collect the positive pairs from 13 word similarity datasets (Wang et al., 2019b).", "For each dataset, the pairs with the highest 25% similarity scores are gathered as positive pairs.", "Background word samples contain all words that appear in the similarity datasets.", "Further, we augment the background word samples using the most frequent 20,000 words from the Wikipedia corpus.", "Sentence-Level.", "Similarly, the pairs with the top 25% similarity/relatedness scores from the STS-Benchmark dataset (Cer et al., 2017) and the STR dataset (Abdalla et al., 2021) are collected as positive pairs.", "All sentences that appear at least once are used as the background sentence samples.", "In both cases, if a positive pair $(c_x, c_y)$ exists, the reversed pair $(c_y, c_x)$ is also added as a positive pair.", "Detailed statistics of the EvalRank datasets are listed in Table 2. 
4.4 Alignment and Uniformity Recently, Wang and Isola (2020) identifies the alignment and uniformity properties as an explanation to the success of contrastive loss.", "It shares many similarities with our method and can also shed light on why EvalRank works.", "First, the alignment property requires similar samples to have similar features, which aligns with the objective of 6064 SCICITE MR CR MPQA SUBJ SST2 SST5 TREC MRPC SICK-E WS-353-All 62.87 43.68 40.94 37.50 15.57 41.65 45.03 34.70 8.98 57.96 WS-353-Rel 66.13 47.92 45.15 41.77 11.65 47.25 48.18 26.36 20.56 61.83 WS-353-Sim 67.86 45.94 43.97 38.68 17.41 44.03 50.32 34.85 10.67 56.13 RW-STANFORD 75.56 74.65 55.35 66.08 46.82 81.50 68.25 45.91 13.08 43.29 MEN-TR-3K 66.91 44.15 45.37 39.14 1.70 38.51 42.11 22.82 28.63 71.26 MTURK-287 68.48 65.95 48.01 52.36 31.94 71.96 58.01 29.22 7.54 36.23 MTURK-771 79.93 60.87 49.45 57.92 24.04 62.75 62.03 29.14 17.44 60.23 SIMLEX-999 68.20 48.02 40.90 46.43 19.03 47.30 50.95 38.14 15.32 60.26 SIMVERB-3500 65.13 45.60 36.95 47.04 21.57 45.16 48.56 41.74 10.70 58.08 EvalRank MRR 89.96 87.91 68.23 78.03 51.35 91.54 83.36 48.15 25.70 61.34 Hits@1 85.91 83.69 66.93 81.43 55.95 89.74 79.46 43.53 28.82 53.86 Hits@3 90.11 88.82 69.92 82.05 54.52 93.32 84.41 48.44 30.87 62.77 Table 3: Spearman's rank correlation ( 100 ) between performance scores of word-level intrinsic evaluation and downstream tasks, where the best is marked with bold and second best with underline.", "EvalRank .", "Second, the uniformity property is measured by the average Gaussian distance between any two samples.", "In contrast, EvalRank focuses on the distance between points from a local perspective and would require the pivot sample to have longer distances to any background samples than its positive candidate.", "Measuring the distance from a local perspective has unique advantages because the learned embedding space will likely form a manifold and can only approximates euclidean space locally.", "Therefore, simple similarity metrics like cos or l 2 are not suitable to model long-distance relationships.", "A good intrinsic evaluator can test the properties that semantically similar samples are close in vector space (Reimers and Gurevych, 2019; Gao et al., 2021) and serve as prompt information to real-world applications.", "As EvalRank directly test on the first property, we design experiments to show the correlation with various downstream tasks as a comparison of intrinsic evaluators.", "To be comprehensive, we first collect as many embedding models as possible and test them on the intrinsic evaluator and downstream task.", "The Spearman's rank correlation is computed between the results, and a higher score indicates better correlation with downstream tasks and better intrinsic evaluator.", "Meantime, we do not think similarity evaluations should be discarded, even though it fails to correlate well with downstream applications.", "It has its advantages as aiming to mimic human perception about semantic-related pairs.", "Word Embedding Models.", "We collect 19 word embedding models from GloVe (Pennington et al., 2014), word2vec (Mikolov et al., 2013b), fastText (Bojanowski et al., 2017), Dict2vec (Tissier et al., 2017) and PSL (Wieting et al., 2015).", "Meantime, we apply ABTT (Mu et al., 2018) post-processing to all models to double the total number of embedding models.", "When testing on downstream tasks, the simplest bag-of-words feature is used as sentence representations in order to focus on measuring the quality of word 
embeddings.", "Word Similarity Tasks.", "9 word similarity datasets are compared as the baseline methods including WS-353-All (Finkelstein et al., 2001), WS-353-Rel (Agirre et al., 2009), WS-353-Sim (Agirre et al., 2009), RW-STANFORD (Luong et al., 2013), MEN-TR-3K (Bruni et al., 2014), MTURK-287 (Radinsky et al., 2011), MTURK-771 (Halawi et al., 2012), SIMLEX-999 (Hill et al., 2015), SIMVERB-3500 (Gerz et al., 2016).", "The word similarity datasets with less than 200 pairs are not selected to avoid evaluation occasionality.", "Cosine similarity and Spearman's rank correlation are deployed for all similarity tasks.", "Downstream Tasks.", "SentEval (Conneau and Kiela, 2018) is a popular toolkit in evaluating sentence embeddings.", "We use 9 downstream tasks from SentEval including MR (Pang and Lee, 2005), CR (Hu and Liu, 2004), MPQA (Wiebe et al., 2005), SUBJ (Pang and Lee, 2004), SST2 (Socher et al., 2013), SST5 (Socher et al., 2013), TREC (Li and Roth, 2002), MRPC (Dolan et al., 2004), SICK-E (Marelli et al., 2014).", "Previous work spot 6065 SCICITE MR SST2 EvalRank 89.96 87.91 91.54 w/o wiki vocabs 88.55 83.99 88.26 w/ WN synonym 90.56 86.56 91.12 w/ l 2 metric 77.47 78.34 81.51 Table 4: Ablation study on variants of EvalRank .", "that SentEval tasks are biased towards sentiment analysis (Wang et al., 2018).", "Therefore, we add one extra domain-specific classification task SCICITE (Cohan et al., 2019) which assigns intent labels (background information, method, result comparison) to sentences collected from scientific papers that cite other papers.", "For all tasks, a logistic regression classifier is used with cross-validation to predict the class labels.", "Table 3 shows the word-level results.", "In short, EvalRank outperforms all word similarity datasets with a clear margin.", "For evaluation metrics, we can see that Hits@3 score shows a higher correlation than MRR and Hits@1 scores.", "However, the gap between the evaluation metrics is not big, which makes them all good measures.", "Among all 10 downstream tasks, EvalRank shows a strong correlation ( >0.6) with 7 tasks and a very strong correlation ( >0.8) with 5 tasks.", "While, among all word similarity datasets, only one dataset (RW-STANFORD) shows a strong correlation with one downstream task (SST2).", "For word similarity datasets, RW-STANFORD dataset shows the best correlation with downstream tasks.", "It confirms the finding in Wang et al. (2019b) that this dataset contains more high-quality and low-frequency word pairs.", "Ablation Study.", "We experiment with several variants of our EvalRank method and the result is shown in Table 4.", "First, if we do not augment the background word samples with the most frequent 20,000 words from the Wikipedia corpus, it leads to certain performance downgrading.", "Without suf-ficient background samples, positive pairs are not challenging enough to test each model's capability.", "Second, we tried to add more positive samples (e.g. 
5k samples) using synonym relations from WordNet (WN) database (Miller, 1998).", "However, no obvious improvement is witnessed because the EvalRank MRR Hits@1 Hits@3 GloVe 13.15 4.66 15.72 word2vec 12.88 4.57 14.35 fastText 17.22 5.77 19.99 Dict2vec 12.71 4.03 13.04", "synonym pairs in WN contain too many noisy pairs.", "Last, for similarity measures, we notice that cos similarity is consistently better than l 2 similarity while both outperform word similarity baselines.", "Benchmarking Results.", "In Table 5a, we compared four popular word embedding models, including GloVe, word2vec, fastText, and Dict2vec, where fastText achieves the best performance.", "Sentence Embedding Models.", "We collect 67 embedding models, where 38 of them are built upon word embeddings with bag-of-words features and 29 of them are neural-network-based models.", "For neural-network-based models, we collect variants from InferSent (Conneau et al., 2017), BERT (De-vlin et al., 2019), RoBERTa (Liu et al., 2020), BERT-flow (Li et al., 2020), BERT-whitening (Su et al., 2021), SBERT (Reimers and Gurevych, 2019) and SimCSE (Gao et al., 2021).", "Sentence Similarity Tasks.", "We evaluate on 7 standard semantic textual similarity datasets including STS12 16 (Agirre et al., 2012, 2013, 2014, 2015, 2016), STS-Benchmark (Cer et al., 2017) and SICK-Relatedness (Marelli et al., 2014).", "Recently, Abdalla et al. (2021) questioned the labeling process of STS datasets and released a new semantic textual relatedness (STR) dataset, which is also included in our experiments.", "from SentEval evaluation toolkit, including MR, CR, MPQA, SUBJ, SST2, SST5, TREC, as well as the domain-specific classification task SCICITE.", "We exclude the MRPC and SICK-E because they are highly similar with STS tasks (Conneau and Kiela, 2018).", "Table 6 shows the sentence-level results.", "EvalRank outperform all sentence similarity datasets with a clear margin.", "For evaluation metric, Hits@1 shows a higher correlation comparing with MRR and Hits@3.", "Among all 7 downstream tasks, EvalRank shows strong correlation ( > 0 . 
6) with 6 tasks.", "For the sentence similarity datasets, none clearly outperforms the others.", "Additionally, we found that the STR dataset shows the worst correlation with downstream tasks.", "[Table 7: Performance under different data sources, reported against SCICITE, MR, and SST2. EvalRank (STS-B + STR): MRR 65.95 / 83.43 / 80.97, Hits@1 69.01 / 85.39 / 82.65, Hits@3 63.35 / 83.92 / 80.36. EvalRank (STS-B): MRR 63.05 / 75.85 / 72.87, Hits@1 66.22 / 77.94 / 75.20, Hits@3 61.23 / 75.49 / 72.92. EvalRank (STR): MRR 63.51 / 83.28 / 80.20, Hits@1 66.59 / 84.53 / 82.14, Hits@3 60.68 / 82.55 / 79.42.] Even though STR adopts a better data annotation schema than the STS datasets, it still", "follows the previous standard evaluation paradigm and is exposed to the same problems.", "It further verifies our discussion about the problems with sentence similarity evaluation.", "Correlation Visualization.", "Figure 4 shows the performance rank of 67 sentence embedding models on five tasks, including 2 downstream tasks (MR, SST2) and 3 intrinsic evaluations (STS-B, STR, EvalRank).", "The models' performance rank on the MR task is used as the pivot.", "As the MR and SST2 datasets are both related to sentiment analysis, they correlate well with each other.", "Among the three intrinsic evaluation tasks, EvalRank shows a higher correlation with downstream tasks, as the blue dots roughly follow the trend of the red dots.", "In contrast, the dots of STS-B and STR are dispersed in different regions.", "This shows that the performance on STS-B and STR is not a good indicator of the performance on downstream tasks.", "Table 7 shows the performance of EvalRank with different data sources.", "By combining the positive pairs collected from both the STS-B and STR datasets, EvalRank achieves the best performance.", "Interestingly, according to our results, even though the STR evaluation does not correlate well with downstream tasks, the positive pairs collected from STR have better quality than those from STS-B.", "It also confirms the argument that STR improves the dataset collection process (Abdalla et al., 2021).", "Benchmarking Results.", "Table 5b benchmarks seven popular sentence embedding models.", "As the widely accepted state-of-the-art model, SimCSE outperforms the others by a clear margin.", "In this work, we first discuss the problems with current word and sentence similarity evaluations and propose EvalRank, an effective intrinsic evaluation method for word and sentence embedding models.", "It shows a higher correlation with downstream tasks.", "We believe that our evaluation method can have a broader impact in developing future embedding evaluation methods, including but not limited to its multilingual and task-specific extensions.", "This research is supported by the Agency for Science, Technology and Research (A*STAR) under its AME Programmatic Funding Scheme (Project No. A18A2b0046) and Science and Engineering Research Council, Agency of Science, Technology and Research (A*STAR), Singapore, through the National Robotics Program under Human-Robot Interaction Phase 1 (Grant No. 192 25 00054)." ]
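To make the ranking-based evaluation defined in the section above concrete, the following is a minimal sketch of how the MRR and Hits@k scores could be computed from an embedding table, a set of positive pairs, and the background samples. It is not the authors' released implementation; the `embeddings` dictionary, the pair format, and the helper names are assumptions made purely for illustration.

```python
import numpy as np

def cos_sim(e_x, e_y):
    # S_cos(c_x, c_y) = e_x^T e_y / (||e_x|| * ||e_y||)
    return float(e_x @ e_y / (np.linalg.norm(e_x) * np.linalg.norm(e_y)))

def l2_sim(e_x, e_y):
    # S_l2(c_x, c_y) = 1 / (1 + ||e_x - e_y||)
    return float(1.0 / (1.0 + np.linalg.norm(e_x - e_y)))

def eval_rank(embeddings, positive_pairs, sim=cos_sim, ks=(1, 3)):
    """embeddings: {sample_id: np.ndarray}; positive_pairs: [(pivot_id, positive_id), ...]."""
    ranks = []
    all_ids = list(embeddings)
    for x, y in positive_pairs:
        pos_score = sim(embeddings[x], embeddings[y])
        # Scores of the pivot against every other background sample (the positive one included).
        bg_scores = [sim(embeddings[x], embeddings[j]) for j in all_ids if j != x]
        # 1-based rank of the positive candidate in the descending score list.
        ranks.append(1 + sum(s > pos_score for s in bg_scores))
    mrr = float(np.mean([1.0 / r for r in ranks]))
    hits = {k: float(np.mean([r <= k for r in ranks])) for k in ks}
    return mrr, hits
```

Passing `sim=l2_sim` gives the l2 variant discussed above; cosine is the default used in the reported experiments.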
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "objective", "objective", "objective", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "other" ]
[ "There is no consensus on the state-of-the-art approach to historical text normalization.", "Many techniques have been proposed, including rule-based methods, distance metrics, character-based statistical machine translation, and neural encoderdecoder models, but studies have used different datasets, different evaluation methods, and have come to different conclusions.", "This paper presents the largest study of historical text normalization done so far.", "We critically survey the existing literature and report experiments on eight languages, comparing systems spanning all categories of proposed normalization techniques, analysing the effect of training data quantity, and using different evaluation methods.", "The datasets and scripts are made publicly available.", "Spelling variation is one of the key challenges for NLP on historical texts, affecting the performance of tools such as part-of-speech taggers or parsers and complicating users' search queries on a corpus.", "Normalization is often proposed as a solution; it is commonly defined as the mapping of historical variant spellings to a single, contemporary normal form as exemplified in Figure 1. Automatic normalization of historical texts has a long history, going back to at least Fix (1980).", "Earlier approaches often rely on hand-crafted algorithms tailored to one specific language, while more recent approaches have focused on supervised machine learning, particularly character-based statistical machine translation (SMT) and its neural equivalent (NMT).", "However, no clear consensus has emerged about the state of the art for this task, with papers either reporting an advantage for NMT (Hmlinen et al., 2018), SMT 1 This work largely builds upon the author's doctoral thesis (Bollmann, 2018), the research for which was carried out at Ruhr-Universitt Bochum, Germany.", "(Domingo and Casacuberta, 2018), or language-specific algorithms (Schneider et al., 2017).", "Moreover, the quantity of annotated training data varies considerably between studies, making it diffi-cult to obtain practical recommendations for new projects seeking to use normalization techniques.", "Contributions This paper aims to provide the most comprehensive evaluation and analysis of historical text normalization systems so far.", "Motivated by a systematic review of previous work on this topic (Sec. 2), only publicly available normalization systems covering a wide range of proposed techniques are selected (Sec. 3) and evaluated across a diverse collection of historical datasets covering eight languages (Sec. 4).", "This is followed by a detailed analysis of the effect of training data quantity and a critical discussion of evaluation methods for assessing normalization quality (Sec. 
5).", "The datasets and code are made freely available whenever possible, 2 along with detailed instructions on how to reproduce the experiments.", "2 https://github.com/coastalcph/ histnorm ; one dataset could not be included due to licensing restrictions.", "The following overview is broadly organized by categories that each represent a conceptually or methodically different approach.", "The conceptually simplest form of normalization is to look up each historical variant in a precompiled list that maps it to its intended normalization.", "This approach can go by many names, such as lexical substitution , dictionary lookup , wordlist mapping , or memorization .", "While it does not generalize in any way to variants that are not covered by the list, it has proven highly effective as a component in several normalization systems, such as the semi-automatic VARD tool (Rayson et al., 2005; Baron and Rayson, 2008) or the fully automatic Norma tool (Bollmann, 2012).", "Rule-based approaches try to encode regularities in spelling variantse.g., historical (cid:104) v (cid:105) often representing modern (cid:104) u (cid:105) in the form of replacement rules, typically including context information to discriminate between different usages of a character.", "Some of the earliest approaches to normalization are rule-based, with rules being created manually for one particular language, such as Old Icelandic (Fix, 1980) or Old German (Koller, 1983).", "VARD 2 uses letter replacement rules to construct normalization candidates, but is not necessarily concerned with precision due to its interactive nature (Baron and Rayson, 2008).", "Bollmann et al. (2011) describe a supervised learning algorithm to automatically derive context-aware replacement rules from training data, including identity rules that leave a character unchanged, then apply one rule to each character of a historical word form to produce a normalization.", "Porta et al. (2013) model phonological sound change rules for Old Spanish using finite-state transducers; Etxeberria et al. (2016) describe a similarly motivated model that can be trained in a supervised manner.", "Rule-based methods are also commonly found when the goal is not to produce a single best normalization, but to cluster a group of spelling variants (Giusti et al., 2007) or to retrieve occurrences of variant spellings given a modern form in an information retrieval (IR) scenario (Ernst-Gerlach and Fuhr, 2006; Koolen et al., 2006).", "Approaches using edit distance measures (such as Levenshtein distance; Levenshtein, 1966) are most commonly found in an IR context, since measures that compare two word forms are a natural fit for matching a search term with relevant word forms in a historical document (e.g., Robertson and Willett, 1993).", "Weighted variants of distance measures can be used to assign lower costs to more likely edit operations (Kempken et al., 2006; Hauser and Schulz, 2007).", "In a normalization context, distance measures can be used to compare historical variants to entries in a contemporary full-form lexicon (Keste-mont et al., 2010; Jurish, 2010a).", "Norma includes a distance-based component whose edit weights can be learned from a training set of normalizations (Bollmann, 2012).", "Pettersson et al. (2013a) find a similar approach to be more effective than hand-crafted rules on Swedish.", "Sometimes, the line between distance-based and rule-based methods get blurred; Adesam et al. 
(2012) use the Levenshtein algorithm to derive substitution rules from training data, which are then used to link up historical Swedish forms with lexicon entries; van Halteren and Rem (2013) describe a comparable approach for Dutch.", "Furthermore, distance measures also lend themselves to unsupervised approaches for clustering historical variants of the same modern form, where identifying the precise modern form is not necessarily required (Amoia and Martnez, 2013; Barteld et al., 2015).", "In a probabilistic view of the normalization task, the goal is to optimize the probability p ( t | s ) that a contemporary word form t is the normalization of a historical word form s .", "This can be seen as a noisy channel model , which has been used for normalization by, e.g., Oravecz et al. (2010) and Etxeberria et al. (2016).", "More commonly, character-based statistical machine translation (CSMT) has been applied to the normalization task.", "Instead of translating a sentence as a sequence of tokens, these approaches translate a historical word form as a sequence of characters.", "This has been found to be very effective for a variety of historical languages, such as Spanish (Snchez-Martnez et al., 2013), Icelandic and Swedish (Pettersson et al., 2013b), Slovene (Scherrer and Erjavec, 2013, 2016; Ljubeic et al., 2016), as well as Hungarian, German, and English (Pettersson, 2016), where it is usually found to outperform previous approaches.", "Pettersson et al. (2014) find that a CSMT system often performs best in a comparison with a filtering method and a distance-based approach on five different languages.", "Schneider et al. (2017) compare VARD 2 to CSMT on English and find that VARD 2 performs slightly better.", "Domingo and Casacuberta (2018) evaluate both word-based and character-based models and find that SMT outperforms a neural network model.", "Neural network architectures have become popular for a variety of NLP tasks, and historical normalization is no exception.", "Character-based neural machine translation (CNMT) is the logical neural equivalent to the CSMT approach, and has first been used for normalization of historical German (Bollmann et al., 2017; Korchagina, 2017) using encoderdecoder models with long short-term memory (LSTM) units.", "Robertson and Goldwater (2018) present a more detailed evaluation of this architecture on five different languages.", "Hmlinen et al. (2018) evaluate SMT, NMT, an edit-distance approach, and a rule-based finite-state transducer, and advocate for a combination of these approaches to make use of their individual strengths; however, they restrict their evaluation to English.", "Other neural architectures have rarely been used for normalization so far.", "Al Azawi et al. (2013) and Bollmann and Sgaard (2016) frame the normalization task as a sequence labelling problem, labelling each character in the historical word form with its normalized equivalent.", "Keste-mont et al. 
(2016) use convolutional networks for lemmatization of historical Dutch.", "Overall, though, the encoderdecoder model with recurrent layers is the dominant approach.", "The presented literature almost exclusively focuses on models where the input is a single token.", "In theory, it would be desirable to include context from the surrounding tokens, as some historical spellings can have more than one modern equivalent depending on the context in which they are used (e.g., historical ther could represent their or there ).", "Remarkably few studies have attempted this so far: Jurish (2010b) uses hidden Markov models to select between normalization candidates; Mitankin et al. (2014) use a language model in a similar vein; Ljubeic et al. (2016) experiment with segment-level input, i.e., a string of several historical tokens as input to a normalizer.", "Since this area is currently very underex-plored, it warrants a deeper investigation that goes beyond the scope of this paper.", "Systems The selection of normalization systems follows two goals:", "(i) to include at least one system for each major category as identified in Sec. 2; and", "(ii) to use only freely available tools in order to facilitate reproduction and application of the described methods.", "To that effect, this study compares the following approaches: Norma 3 (Bollmann, 2012), which combines substitution lists, a rule-based normalizer, and a distance-based algorithm, with the option of running them separately or combined.", "Importantly, it implements supervised learning algorithms for all of these components and is not restricted to a particular language.", "cSMTiser 4 (Ljubeic et al., 2016; Scherrer and Ljubeic, 2016), which implements a normalization pipeline using character-based statistical machine translation (CSMT) using the Moses toolkit (Koehn et al., 2007).", "Neural machine translation (NMT) , in the form of two publicly available implementations:", "(i) the model by Bollmann (2018), also used in Bollmann et al. (2018); 5 and", "(ii) the model by Tang et al. (2018).", "6 Two systems were chosen for the NMT approach as they use very different hyperparameters, despite both using comparable neural encoder decoder models: Bollmann (2018) uses a single LSTM layer with dimensionality 300 in the encoder and decoder, while Tang et al. (2018) use six vanilla RNN cells with dimensionality 1024.", "3 https://github.com/comphist/norma 4 https://github.com/clarinsi/csmtiser 5 I reimplemented the model here using the XNMT toolkit (Neubig et al., 2018).", "6 https://github.com/tanggongbo/ normalization-NMT ; their model uses the deep transition architecture of Sennrich et al. 
(2017, Sec.2.3.1) as implemented by Marian (Junczys-Dowmunt et al., 2018).", "Datasets Table 1 gives an overview of the historical datasets.", "They are taken from Bollmann (2018) and represent the largest and most varied collection of datasets used for historical text normalization so far, covering eight languages from different language familiesEnglish, German, Hungarian, Icelandic, Spanish, Portuguese, Slovene, and Swedishas well as different text genres and time periods.", "Furthermore, most of these have also been used in previous work, such as the English, Hungarian, Icelandic, and Swedish datasets (e.g., Pettersson et al., 2014; Pettersson, 2016; Robertson and Goldwater, 2018; Tang et al., 2018) and the Slovene datasets (e.g., Ljubeic et al., 2016; Scherrer and Erjavec, 2016; Etxeberria et al., 2016; Domingo and Casacuberta, 2018).", "Additionally, contemporary datasets are required for the rule-based and distance-based components of Norma, as they expect a list of valid target word forms to function properly.", "For this, we want to choose resources that are readily available for many languages and are reliable, i.e., consist of carefully edited text.", "Here, I choose a combination of three sources: 7", "(i) the normalizations in the training sets,", "(ii) the Europarl corpus (Koehn, 2005), and", "(iii) the parallel Bible corpus by Christodouloupoulos and Steedman (2015).", "The only exception is Icelandic, which is not covered by Europarl; here, we can follow Pettersson (2016) instead by using data from two specialized resources, the BN database (Bjarnadt-tir, 2012) and the MM corpus (Helgadttir et al., 2012).", "This way, we obtain full-form lexica of 7 Detailed descriptions of the data extraction procedure can be found in the Supplementary Material.", "12k64k word types from the Bible corpus, 55k 268k types from Europarl, and 2.8M types from the Icelandic resources.", "Preprocessing The most important preprocessing decisions 8 are", "(i) to lowercase all characters and", "(ii) to remove all punctuation-only tokens.", "Both capitalization and punctuation often cannot be handled correctly without considering token context, which all current normalization models do not do.", "Furthermore, their usage can be very erratic in historical texts, potentially distorting the evaluation; e.g., when a text uses punctuation marks according to modern conventions, their normalization is usually trivial, resulting in artificial gains in normalization accuracy that other texts do not get.", "At the same time, most previous work has not followed these same preprocessing guidelines, making a direct comparison more difficult.", "This work tries to make up for this by evaluating many different systems, effectively reproducing some of these previous results instead.", "All models are trained and evaluated separately for each dataset by calculating word accuracy over all tokens.", "In particular, there is no step to discriminate between tokens that require normalization and those that do not; all word forms in the datasets are treated equally.", "8 The full preprocessing steps can be found in the Supplementary Material.", "component, while the latter is reported to produce the best results (Bollmann, 2012).", "For cSMTiser, the authors suggest using additional monolingual data to improve the language model; the contemporary datasets are used for this purpose and the model is trained both without and with this additional data; the latter is denoted cSMTiser +LM .", "For NMT, the model by Bollmann 
(2018) is evaluated using an ensemble of five models; the model by Tang et al. (2018) is trained on character-level input using the default settings provided by their implementation.", "9 To illustrate how challenging the normalization task is on different datasets, we can additionally look at the identity baseline i.e., the percentage of tokens that do not need to be normalizedas well as the maximum accuracy obtainable if each word type was mapped to its most frequently occurring normalization.", "The latter gives an indication of the extent of ambiguity in the datasets and the disadvantage of not considering token context (cf. Sec. 2.6).", "Results Table 2 shows the results of this evaluation.", "The extent of spelling variation varies 9 This is the Att-RNN setting reported in their paper; due to the high computational demands of the model, it was not feasible to run experiments with multiple configurations.", "greatly between datasets, with less than 15% of tokens requiring normalization (SLG ) to more than 80% (HU).", "The maximum accuracy is above 97% for most datasets, suggesting that we can obtain high normalization accuracy in principle even without considering token context.", "For the normalization systems, we observe significantly better word accuracy with SMT than NMT on four of the datasets, and non-significant differences on five others.", "There is only one dataset (DEA ) where the NMT system by Tang et al. (2018) gets significantly better word accuracy than other systems.", "This somewhat contradicts the results from Tang et al. (2018), who find NMT to usually outperform the SMT baseline by Pettersson et al. (2014).", "However, note that the results for the cSMTiser system are often significantly better than reported in previous work: e.g., on Hungarian, cSMTiser obtains 91.7% accuracy, but only 80.1% with the SMT system from Pettersson et al. (2014).", "Overall, the deep NMT model by Tang et al. 
(2018) consistently outperforms the shallow one by Bollmann (2018).", "cSMTiser seems to ben-efit from the added contemporary data for language modelling, though the effect is not significant on any individual dataset.", "Finally, while Norma does produce competitive results on sev-Method Dataset DEADEREN ES HU IS PT SLBSLGSV Norma, Lookup 0.41 0.31 0.38 0.35 0.43 0.38 0.39 0.44 0.44 0.29 Norma, Rule-based 0.40 0.33 0.43 0.39 0.38 0.40 0.45 0.47 0.46 0.32 Norma, Distance-based 0.42 0.34 0.46 0.44 0.41 0.44 0.50 0.52 0.39 0.38 Norma (Combined) 0.41 0.33 0.45 0.42 0.34 0.42 0.51 0.51 0.42 0.31 cSMTiser 0.37 0.26 0.39 0.41 0.26 0.40 0.50 0.53 0.56 0.24 cSMTiser +LM 0.39 0.27 0.39 0.42 0.27 0.41 0.50 0.53 0.56 0.24 NMT (Bollmann, 2018) 0.38 0.26 0.39 0.43 0.27 0.40 0.48 0.47 0.51 0.23 NMT (Tang et al., 2018) 0.38 0.27 0.38 0.42 0.26 0.41 0.46 0.50 0.56 0.24", "(b) Stemming accuracy: percentage of incorrect normalizations with correct word stems (higher is better) Table 3: Evaluations on the subset of incorrect normalizations only; best results for each dataset in bold.", "eral datasets (particularly in the combined set-ting), it is generally significantly behind the SMT and NMT methods.", "While word accuracy is easily interpretable, it is also a very crude measure, as it classifies predictions as correct/incorrect without considering the type of error(s) made by the model.", "Character error rate (CER) has sometimes been suggested as a complement to address this issue, but I believe this is not very insightful: For any normalization system that achieves a reasonably high word accuracy, CER will highly correlate with accuracy simply because CER equals zero for any word that is accurately normalized.", "10 At the same time, there is a need for a more fine-grained way to assess the normalization quality.", "Consider the follow-10 When comparing word accuracy scores in Table 2 with the same configurations evaluated using CER, they correlate with Pearson's r 0 .", "96 .", "ing example from the Hungarian dataset with its predicted normalization from the NMT system by Bollmann (2018): (1) ORIG yduewzewlendewk GOLD dvzlendoek PRED dvzlendok Here, the prediction matches the correct target form almost perfectly, but would be counted as incorrect since it misses an insertion of the letter (cid:104) e (cid:105) towards the end.", "In this vein, it will be treated the same by the word accuracy measure as a prediction that, e.g., had left the original form unchanged.", "CERI One alternative is to consider character error rate on the subset of incorrect normalizations only.", "This way, CER becomes a true complement to word accuracy by assessing the magnitude of error that a normalization model makes when it is not perfectly accurate.", "The results of this measure, denoted CERI , are shown in Table 3a.", "The lowest CERI score is often achieved by Norma's lookup module, which leaves historical word forms unchanged if they are not in its lookup wordlist learned during training.", "This suggests that the incorrect predictions made by other systems are often worse than just leaving the historical spelling unchanged.", "Stemming Another problem of CER is that all types of errors are treated the same: a one-letter difference in inflection, such as king kings or came come , would be treated identically to an error that changes the meaning of the word ( bids beds ) or results in a non-word ( creature crya-ture ).", "I propose an approach that, to the best of my knowledge, has not been used in normalization 
evaluation before: measure accuracy on word stems, i.e., process both the reference normalization and the prediction with an automatic stemming algorithm and check if both stems match.", "For this evaluation, I choose the Snowball stemmer (Porter, 2001) as it contains stemming algorithms for many languages (including the ones represented here except for Icelandic and Slovene) and is publicly available.", "11 Table 3b shows the accuracy on word stems, again only evaluated on the subset of incorrect normalizations, as this better highlights the differences between settings.", "This evaluation reveals some notable differences between datasets : For example, while the English and Spanish datasets have very comparable accuracy scores overall (cf. Tab. 2), they show very different characteristics in the stemming evaluation; for English, only up to 9.86% of incorrect predictions show the correct word stem, while for Spanish the number is up to 43.82%.", "Examining predictions on the dev set, many of the incorrectly predicted cases in Spanish result from mistakes in placement of diacritics, such as sta est or enve envi ; the stemming algorithm removes diacritics and can therefore match these instances.", "Overall, this gives an indication that the errors made on the Spanish dataset are less severe than those on English, despite comparable word accuracy scores and a usually higher CERI for Spanish.", "This case study shows that stemming can be a useful tool for error analysis in normalization models and reveal characteristics that neither word accuracy nor CER alone can show.", "Supervised methods for historical text normalization have been evaluated with highly varying amounts of training data: e.g., Domingo and Casacuberta (2018) train a normalizer for 17 th century Spanish on 436k tokens; Etxeberria et al. (2016) use only 8k tokens to train a normalizer for Basque.", "Even in the evaluation in Sec. 
4, training set sizes varied between 24k and 234k tokens, depending on the dataset.", "Furthermore, many research projects seeking to use automatic normalization techniques cannot afford to produce training data in high quantity.", "All of this raises the question how different normalization systems perform with varying amounts of training data, and whether reasonable normalization results can be achieved in a low-resource scenario.", "Methodology All models are retrained on varying subsets of the training data, with sizes ranging from 100 tokens to 50,000 tokens.", "However, the lower the training set size is, the higher the potential variance when training on it, since random factors such as the covered spelling variants or vocabulary are more likely to impact the results.", "Therefore, I choose the following approach: For each dataset and training size, up to ten different training splits are extracted, 12 and a separate model is trained on each one.", "Each model is then evaluated on the respective development dataset, and only the average accuracy across all splits is considered.", "Results Figure 2 shows two learning curves that are representative for most of the datasets.", "13 They reveal that Norma (in the combined setting) performs best in extremely low-resource scenarios, but is overtaken by the SMT approach as more training data becomes available; usually already around 5001000 tokens.", "The NMT models have a steeper learning curve, needing more training data to become competitive.", "Extrapolating this trend, it is conceivable that the NMT models would simply need more training data than our current datasets provide in order to consistently outperform the SMT approach.", "On the other hand, there appears to be no correlation between the size of the training set (cf. Tab. 1) and the relative per-12 The ten training splits consist of chunks of n tokens that are spaced equidistantly across the full training set; for larger n , the number of chunks is reduced so that no splits overlap to more than 50%.", "13 Plots for all datasets can be found in the Appendix.", "formance of NMT vs. SMT (cf. Tab. 2) in the experiments.", "Since I am not aware of larger datasets for the historical normalization task, this remains an open question for now.", "A remarkable result is that very small amounts of training data can already be helpful for the normalization task.", "The English dataset has comparatively little spelling variation to begin with: leaving all words unnormalized already results in an accuracy of 75.5%.", "Still, with as little as 100 tokens for training, applying the Norma tool raises the accuracy above 83%.", "For Hungarian, the same amount of training data raises the accuracy from 17.8% (unnormalized) to around 50%.", "It would be interesting to further compare these results with fully unsupervised methods.", "Robertson and Goldwater (2018) highlight the importance of evaluating separately on seen vs. unseen tokens, i.e., tokens that have also been in the training set (in-vocabulary) and those that have not (out-of-vocabulary), as well as comparing to a naive memorization baseline.", "These numbers are presented in Table 4. For unseen tokens (Tab. 4b), the accuracy scores follow generally the same trend as in the full evaluation of Tab.", "2; i.e., SMT performs best in most cases.", "For seen tokens (Tab. 
4a), however, Norma's lookup component which implements naive memorizationobtains the highest score on nine datasets.", "These observations suggest a new normalization strategy: apply the naive lookup on the subset of in-vocabulary tokens and the SMT/NMT models on the subset of out-of-vocabulary tokens only.", "Table 5 shows the results of this strategy.", "14 On nine datasets, it performs better than always using the learned models (as in Tab. 2), and this difference is statistically significant on five of them.", "These results support the claim from Robertson and Goldwater (2018) that learned models should typically only be applied to unseen tokens. 6 Conclusion This paper presented a large study of historical text normalization.", "Starting with a systematic survey of the existing literature, four different systems (based on supervised learning) were evaluated and compared on datasets from eight different languages.", "On the basis of these results, we can extract some practical recommendations for projects seeking to employ normalization techniques: 1. to use the Norma tool when only little training data ( < 500 tokens) is available; 2. to use cSMTiser otherwise, ideally with additional data for language modelling; and 3. to make use of the naive memoriza-tion/lookup technique for in-vocabulary tokens when possible.", "Furthermore, the qualitative analysis (in Sec. 5.1) should encourage authors evaluating normalization systems to use task-motivated approaches, such as evaluation on word stems, to provide 14 The non-lookup components of Norma are not included in this evaluation since Norma (Combined) effectively implements such a strategy already.", "deeper insight into the properties of their models and datasets.", "Detailed information on how to train and apply all of the evaluated techniques is made available online at https://github.com/ coastalcph/histnorm .", "I would like to thank my PhD supervisor, Stefanie Dipper, for her continuous support over many years that culminated in the doctoral thesis on", "which this paper is based; the acknowledgments in that thesis largely extend to this paper as well.", "Many thanks to Anders Sgaard for many helpful discussions and for supporting the follow-up experiments conducted for this paper.", "Further thanks go to the anonymous reviewers whose helpful suggestions have largely been incorporated here.", "I gratefully acknowledge the donation of a Titan Xp GPU by the NVIDIA Corporation that was used for a substantial part of this research." ]
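The practical recommendation above — memorize normalizations for tokens seen in training and apply the learned model only to unseen tokens — can be sketched as follows. This is an illustrative outline, not the released histnorm code; `learned_model` is a stand-in for a trained cSMTiser or NMT normalizer, and the example spellings are made up.

```python
from collections import Counter, defaultdict

def train_lookup(pairs):
    """Memorize the most frequent gold normalization for every historical form seen in training."""
    counts = defaultdict(Counter)
    for hist, norm in pairs:
        counts[hist][norm] += 1
    return {hist: c.most_common(1)[0][0] for hist, c in counts.items()}

def normalize(tokens, lookup, learned_model):
    """Naive lookup for in-vocabulary tokens; back off to the learned model for unseen ones."""
    return [lookup[tok] if tok in lookup else learned_model(tok) for tok in tokens]

# Usage sketch with invented spellings; a real learned_model would be a cSMTiser/NMT system.
lookup = train_lookup([("haue", "have"), ("haue", "have"), ("loue", "love")])
print(normalize(["haue", "vppon"], lookup, learned_model=lambda tok: tok))
```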
[ "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other" ]
[ "Multimodal fusion has been proved to improve emotion recognition performance in previous works.", "However, in real-world applications, we often encounter the problem of missing modality, and which modalities will be missing is uncertain.", "It makes the fixed multimodal fusion fail in such cases.", "In this work, we propose a unified model, Missing Modality Imagination Network (MMIN), to deal with the uncertain missing modality problem.", "MMIN learns robust joint multimodal representations, which can predict the representation of any missing modality given available modalities under different missing modality conditions.", "Comprehensive experiments on two benchmark datasets demonstrate that the unified MMIN model significantly improves emotion recognition performance under both uncertain missing-modality testing conditions and full-modality ideal testing condition.", "The code will be available at https://github.com/AIM3-RUC/MMIN.", "Automatic multimodal emotion recognition is very important to natural human-computer interactions (Fragopanagos and Taylor, 2002).", "It aims to understand and interpret human emotions expressed through multiple modalities such as speech content, voice tones and facial expression.", "Previous works have shown that these different modalities are complimentary for emotion expression, and proposed many effective multimodal fusion methods to improve the emotion recognition performance (Baltrusaitis et al., 2018; Tsai et al., 2019; Zhao et al., 2018).", "However, in real applications, many common causes can lead to the missing modality problem.", "For example, the camera is turned off or Equal Contribution Corresponding Author ...", "blocked due to privacy issues; the speech content is unavailable due to automatic speech recognition errors; the voice and text are missing due to the silence of the user; or the faces cannot be detected due to lighting or occlusion issues as shown in Figure 1. Existing multimodal fusion models trained on full-modality samples usually fail when partial modalities are missing (Aguilar et al., 2019; Pham et al., 2019; Cai et al., 2018; Parthasarathy and Sundaram, 2020).", "The missing modality problem has attracted more research attention in the past years, and the existing solutions for this problem are mainly based on learning joint multimodal representation so that all modality information can be encoded.", "Han et al. 
(Han et al., 2019) propose a joint training approach that implicitly fuses multimodal information from auxiliary modalities, which improves the mono-modal emotion recognition performance.", "The re-cent cross-modality sequential translation-based methods proposed in (Pham et al., 2019; Wang et al., 2020) learn the joint multimodal representations via translating a source modality to multiple target modalities, which improves the performance of the source modality as input at the test time.", "However, these methods can only deal with the scenario where the source modality is input to the trained model.", "Different models need to be built for different missing modality cases 1 .", "Additionally, the sequential translation-based models require translation and generation of videos, audios, and text, which are difficult to train especially with limited training samples (Li et al., 2018; Pham et al., 2019).", "In this work, we propose a novel unified model, Missing Modality Imagination Network (MMIN), to address the above issues.", "Specifically, the proposed MMIN learns the robust joint multimodal representations through cross-modality imagination with Cascade Residual Autoencoder (CRA) (Tran et al., 2017) and Cycle Consistency Learning (Zhu et al., 2017) based on sentence-level modality-specific representations, as the sentence-level representation is more reasonable for modeling the cross-modality emotion correlation.", "The imagination module aims to predict the sentence-level emotional representation of the missing modality from the other available modalities.", "To the best of our knowledge, this is the first work that investigates a unified model for multimodal emotion recognition with uncertain missing-modality.", "Extensive experiments are carried out on two benchmark datasets, IEMOCAP and MSP-IMPROV, under both uncertain missing-modality and full-modality conditions.", "The proposed MMIN model as a unified multimodal emotion recognition model can learn robust joint multimodal representations and outperforms the standard multimodal fusion models on both benchmark datasets under both the uncertain missing-modality and the full-modality conditions.", "Furthermore, to evaluate the imagination ability of our MMIN model, we visualize the distributions of the imagined representations of the missing modalities and its ground-truth representations and find they are very similar, which demonstrates that MMIN can imagine the representations of the missing modalities based on the representations of the available modalities.", "In summary, the main contributions of this work are: 1) We propose a unified model, Missing Modality Imagination Network (MMIN), to improve the robustness of emotion recognition sys-tems under uncertain missing-modality testing con-1 If there are", "audio(a),visual(v) and textual(t) three modalities, then the system needs 6 models trained under 6 missing modality conditions { a } , { v } , { t } , { a,v } , { a,t } and { v,t } , plus one model trained under the full-modality data.", "ditions.", "2) We design cross-modality imagination based on paired multimodal data and adopt Cascade Residual Autoencoder (CRA) and Cycle Consistency Learning to learn the robust joint multimodal representations.", "3) Extensive experiments on two benchmark datasets demonstrate the effectiveness of the proposed model which improves the emotion recognition performance under both the uncertain missing-modality and the full-modality conditions.", "Multimodal Emotion Recognition Many previous 
works have focused on fusing multimodal information to improve emotion recognition performance.", "Temporal attention-based methods are proposed to use the attention mechanism to selectively fuse different modalities based on the frame-level or word-level temporal sequence, such as Gated Multimodal Unit (GMU) (Aguilar et al., 2019), Multimodal Alignment Model (MMAN) (Xu et al., 2019) and Multi-modal Attention mechanism (cLSTM-MMA) (Pan et al., 2020).", "These methods use different uni-modal sub-networks to model the contextual representations for each modality and then use the multimodal attention mechanism to selectively fuse the representations of different modalities.", "Liang et al. (Liang et al., 2020) propose a semi-supervised multimodal (SSMM) emotion recognition model which uses cross-modality emotional distribution matching to leverages unlabeled data to learn the robust representations and achieves state-of-the-art performance.", "Missing Modality Problem Existing methods for missing modality problem can mainly be divided into three groups.", "The first group features the data augmentation approach, which randomly ablates the inputs to mimic missing modality cases (Parthasarathy and Sundaram, 2020).", "The second group is based on generative methods to directly predict the missing modalities given the available modalities (Li et al., 2018; Cai et al., 2018; Suo et al., 2019; Du et al., 2018).", "The third group aims to learn the joint multimodal representations that can contain related information from these modalities (Aguilar et al., 2019; Pham et al., 2019; Han et al., 2019; Wang et al., 2020).", "Data augmentation methods: Parthasarathy et al. (Parthasarathy and Sundaram, 2020) propose a strategy to randomly ablate visual inputs during training at the clip or frame level to mimic real-world missing modality scenarios for audio-visual multimodal emotion recognition, which improves the recognition performance under missing modality conditions.", "Generative methods: Tran et al. (Tran et al., 2017) propose Cascaded Residual Autoencoder (CRA) to utilize the residual mechanism over the autoencoder structure, which can take the corrupted data and estimate a function to well restore the incomplete data.", "Cai et al. (Cai et al., 2018) propose an encoder-decoder deep neural network to generate the missing modality (Positron Emission Tomography, PET) given the available modality (Mag-netic Resonance Imaging, MRI), and the generated PET can provide complementary information to improve the detection and tracking of Alzheimers disease.", "Learning joint multimodal representations: Han et al. (Han et al., 2019) propose a joint training model that consists of two modality-specific encoders and one shared classifier, which implicitly fuse the audio and visual information as joint representations and improve the performance of the mono-modality emotion recognition.", "Pham et al. (Pham et al., 2019) propose a sequential translation-based model to learn the joint representation between the source modality and multiple target modalities.", "The hidden vectors of the source modality encoder work as the joint representations, which improve the emotion recognition performance of the source modality.", "Wang et al. 
(Wang et al., 2020) follow this translation-based method and propose a more efficient transformer-based translation model with parallel translation including textual features to acoustic features and textual features to visual features.", "Moreover, the above two translation-based models adopt the forward translation and backward translation training strategy to ensure that joint representations can retain maximal information from all modalities.", "Given a set of video segments S , we use x = ( x a , x v , x t ) to represent the raw multimodal features for a video segment s S , where x a , x v and x t represent the raw features of acoustic, visual and textual modalities respectively.", "| S | represents the number of video segments in set S .", "We denote the target set Y = { y i } | S | i =1 , y i { 0 , 1 , . . . , C } , where y i is the target emotion category of the video (available,missing) unified triplet format pairs 1 (( x a ) , ( x v ,x t )) (( x a ,x vmiss ,x tmiss ) , ( x amiss ,x v ,x t )) 2 (( x v ) , ( x a ,x t )) (( x amiss ,x v ,x tmiss ) , ( x a ,x vmiss ,x t )) 3 (( x t ) , ( x a ,x v )) (( x amiss ,x vmiss ,x t ) , ( x a ,x v ,x tmiss )) 4 (( x a ,x v ) , ( x t )) (( x a ,x v ,x tmiss ) , ( x amiss ,x vmiss ,x t )) 5 (( x a ,x t ) , ( x v )) (( x a ,x vmiss ,x t ) , ( x amiss ,x v ,x tmiss )) 6 (( x v ,x t ) , ( x a )) ( x amiss ,x v ,x t ) , ( x a ,x vmiss ,x tmiss )) Table 1: The six possible missing-modality conditions and their unified format cross-modality pairs.", "segment s i and | C | is the number of emotion categories.", "Our proposed method aims to recognize the emotion category y i for every video segment s i with full modalities, or with only partial modalities available, for the example shown in Figure 1, there exist only acoustic and textual modalities when visual modality is missing.", "In order to learn robust joint multimodal representations, we propose a unified model, Missing Modality Imagination Network (MMIN), which can deal with different uncertain missing-modality conditions in real application scenarios.", "Figure 2 illustrates the framework of our proposed MMIN model which contains three main modules: 1) Modality Encoder Network for extracting modality-specific embeddings; 2) Imagination Module based on the Cascade Residual Autoencoder (CRA) and Cycle Consistency Learning for imagining the representations of missing modalities given the representations of the corresponding available modalities.", "The latent vectors of the autoencoders in CRA are collected to form the joint multimodal representations; 3) Emotion classifier for predicting the emotion category based on the joint multimodal representations.", "We introduce each module in details in the following subsections.", "The Modality Encoder Network is used to extract the modality-specific utterance-level embeddings based on the raw modality features x .", "As shown in Figure", "2(b), we first pretrain the Modality Encoder Network in a multimodal emotion recognition model and it is further trained within MMIN model.", "We define the modality-specific embeddings of each modality as h a = EncA( x a ) , h v = EncV( x v ) , h t = EncT( x t ) , where EncA , EncV and EncT represent the acoustic, visual and textual encoders respectively, and h a , h v and h t represent the modality-specific embeddings generated by the corresponding encoders respectively.", "Given a training sample with all three modalities ( x a , x v , x t ) , there are 6 different possible missing-modality conditions as shown in 
Table 1. We can build a cross-modality pair (available, missing) under each missing-modality condition, where available and missing mean the available modalities and the corresponding missing modalities respectively.", "In order to ensure a unified model that can handle various missing-modality conditions, we enforce a unified triplet input format for the modality encoder network as (x_a, x_v, x_t).", "Under the missing-modality conditions, the raw features of the corresponding missing modalities are replaced by zero vectors.", "For example, the unified-format input of the available modalities under the visual-modality-missing condition (case 1 in Table 1) is formatted as (x_a, x_v^miss, x_t), where x_v^miss refers to zero vectors.", "Under the missing-modality training conditions, the input includes the cross-modality pairs referring to available modalities and missing modalities in the unified triplet format (as shown in Table 1; illustrative code sketches of this pair construction and of the CRA cascade follow this excerpt).", "where h_a^miss, h_v^miss and h_t^miss represent the modality-specific embeddings when the corresponding modality is missing, which are produced by the corresponding modality encoders with zero vectors as input.", "We propose an autoencoder-based Imagination Module to predict the multimodal embeddings of the missing modalities given the multimodal embeddings of the available modalities.", "The Imagination Module is expected to learn robust joint multimodal representations through cross-modality imagination.", "As illustrated in Figure", "2(a), we employ the Cascade Residual Autoencoder (CRA) (Tran et al., 2017) structure, which has sufficient learning capacity and more stable convergence than the standard autoencoder.", "The CRA structure is constructed by connecting a series of Residual Autoencoders (RAs).", "We further employ cycle consistency learning (Zhu et al., 2017; Wang et al., 2020) with a coupled-net architecture of two independent networks to perform imagination in two directions, namely the forward (available → missing) and backward (missing → available) imagination directions.", "To be specific, we use a CRA model with B RAs, where each RA is represented by f_k, k = 1, 2, . . . , B, and the calculation of each RA can be defined as: z_k = f_k(h) for k = 1, and z_k = f_k(h + Σ_{j=1}^{k-1} z_j) for k > 1 (2), where h is the extracted multimodal embedding based on the available modalities in a unified cross-modality pair format", "(Eq.(1)) and z_k represents the output of the k-th RA.", "Taking the visual-modality-missing condition as an example (as shown in Figure", "2(a)), the forward imagination aims to predict the multimodal embedding of the missing visual modality based on the available acoustic and textual modalities.", "The forward imagined multimodal embedding is expressed as: h′ = imagine_forward(h) = h + Σ_{k=1}^{B} z_k (3), where imagine(·) represents the function of the Imagination Module.", "The backward imagination aims to predict the multimodal embedding of the available modalities based on the forward imagined multimodal embedding h′", "(Eq.(3)).", "The backward imagined multimodal embedding is expressed as: h″ = imagine_backward(h′) (4). 3.1.4 Classifier. We collect the latent vectors of each autoencoder in the forward imagination module and concatenate them together to form the joint multimodal representation: R = concat(c_1, c_2, . . .
, c_B), where c_k is the latent vector of the autoencoder in the k-th RA.", "Based on the joint multimodal representation R, we calculate the probability distribution q as: q = softmax(f_cls(R)) (5), where f_cls(·) denotes the emotion classifier, which consists of several fully-connected layers.", "The loss function for MMIN training includes three parts: the emotion recognition loss L_cls, the forward imagination loss L_forward, and the backward imagination loss L_backward.", "where p is the true distribution of the one-hot label and q is the prediction distribution calculated in", "Eq.(5).", "H(p, q) is the cross-entropy between distributions p and q.", "The corresponding ground-truth representations (the targets of the forward and backward imagination losses) are extracted by the modality encoder network as shown in", "Eq.(1).", "We combine all three losses into the joint objective function below to jointly optimize the model parameters: L = L_cls + λ_1 L_forward + λ_2 L_backward (7), where λ_1 and λ_2 are weighting hyperparameters for L_forward and L_backward respectively.", "We evaluate our proposed model on two benchmark multimodal emotion recognition datasets, Interactive Emotional Dyadic Motion Capture (IEMOCAP) (Busso et al., 2008) and MSP-IMPROV (Busso et al., 2016).", "The statistics of the two datasets are shown in Table 2. IEMOCAP contains recorded videos in 5 dyadic conversation sessions.", "In each session, there are multiple scripted plays and spontaneous dialogues between a male and a female speaker, with 10 speakers in total in the database.", "We follow the emotional label processing in (Xu et al., 2019; Liang et al., 2020) to form the four-class emotion recognition setup.", "MSP-IMPROV contains recorded video segments in dyadic conversation scenarios with 12 actors.", "We first remove videos that are shorter than 1 second.", "Then we select the videos in the Other-improvised group, which are recorded during the improvisation scenarios with happy, anger, sadness, or neutral labels, to form the four-class emotion recognition setup.", "We first define the original training set, which contains all three modalities, as the full-modality training set.", "Based on the full-modality training set, we construct another training set that contains cross-modality pairs to simulate the possible missing-modality conditions; we define it as the missing-modality training set, which we use to train the proposed MMIN.", "Six different cross-modality pairs (Table 1) are generated for each training sample.", "Therefore, the number of generated cross-modality pairs is six times as large as the number of full-modality training samples.", "We first define the original testing set, which contains all three modalities, as the full-modality testing set.", "To evaluate the performance of the proposed MMIN under the uncertain missing-modality conditions, we construct six different missing-modality testing subsets corresponding to the six possible missing-modality conditions respectively.", "For example, in the inference stage, under the missing-visual-modality condition as shown in Figure", "2(c), the raw feature of a missing-modality testing sample in the unified format is (x_a, x_v^miss, x_t).", "We combine all six missing-modality testing subsets together and denote it as the missing-modality testing set.", "We follow the feature extraction methods described in (Liang et al., 2020; Pan et al., 2020) and extract the frame-level raw features of each modality 2 .", "Acoustic features: the OpenSMILE toolkit (Eyben et al., 2010) with the
configuration of IS13 ComParE is used to extract frame-level features, which have similar performance to the IS10 utterance-level acoustic features used in (Liang et al., 2020).", "We denote the features as ComParE and the feature vectors are in 130 dimensions.", "et al., 2017), which is trained on the Facial Expression Recognition Plus (FER+) corpus (Barsoum et al., 2016).", "We denote the facial expression features as Denseface.", "The Denseface features are frame-level sequential features based on the detected faces from the video frames, and the feature vectors are in 342 dimensions.", "Textual features: We extract contextual word embeddings using a pretrained BERT-large model (Devlin et al., 2019), which is one of the state-of-the-art language representations.", "We denote the word embeddings as Bert and the features are in 1024 dimensions.", "To generate more efficient sentence-level modality-specific representations for the Imagination Module, we design different modality encoders for the different modalities.", "Acoustic Modality Encoder (EncA): We apply a Long Short-Term Memory (LSTM) network (Sak et al., 2014) to capture the temporal information in the sequential frame-level raw acoustic features x_a.", "Then we use max-pooling over the LSTM hidden states to get the utterance-level acoustic embedding h_a.", "Visual Modality Encoder (EncV): We adopt a similar method to EncA on the sequential frame-level facial expression features x_v and get the utterance-level visual embedding h_v.", "Textual Modality Encoder (EncT): We apply a TextCNN (Kim, 2014) to get the utterance-level textual embedding h_t based on the sequential word-level features x_t.", "Our baseline model takes the structure shown in Figure", "2(b); it is trained on the full-modality training set and we use it as our full-modality baseline.", "To improve the system robustness against the missing modality problem, one intuitive solution is to add samples under the missing-modality conditions into the training set.", "We therefore pool the missing-modality training set and the full-modality training set together to train the baseline model and use it as our augmented baseline.", "Table 3 presents our implementation details.", "We use 10-fold and 12-fold speaker-independent cross-validation to evaluate the models on IEMOCAP and MSP-IMPROV respectively.", "Table 3 (implementation details): Acoustic Encoder: single-layer LSTM with hidden size 128; Visual Encoder: single-layer LSTM with hidden size 128; Textual Encoder: TextCNN with 3 Conv blocks, kernel sizes {3,4,5} and a 128-channel output layer; Emotion Classifier: 3 FC layers of sizes {128, 64, 4}; CRA: 5 RAs with RA layers of sizes 384-256-128-64-128-256-384 (latent-vector size: 64); parameters λ_1 and λ_2: both set to 0.1; optimization: Adam with learning rate 0.001, ReLU activation. Table 4 (multimodal emotion recognition results on IEMOCAP under the full-modality condition, train {a,v,t} / test {a,v,t}): our full-modality baseline: WA 0.7651, UA 0.7779; cLSTM-MMA (Pan et al., 2020): 0.7394; SSMM (Liang et al., 2020): WA 0.7560, UA 0.7450. For the experiments on IEMOCAP, we take four sessions for", "training, and the remaining session is split by speakers into the validation and testing sets.", "For MSP-IMPROV, we take the utterances of 10 speakers for training, and the remaining 2 speakers are divided by speaker into the validation and testing sets.", "We train the model for at most 100 epochs in each experiment.", "We select the best model on the
validation set and report its performance on the testing set.", "To demonstrate the robustness of our models, we run each model three times to alleviate the influence of random parameter initialization, and we apply a significance test for model comparison.", "All models are implemented with the PyTorch deep learning toolkit and run on a single Nvidia GTX 1080Ti graphics card.", "For the experiments on IEMOCAP, we use two evaluation metrics: weighted accuracy (WA) and unweighted accuracy (UA).", "Due to the imbalance of emotion categories in MSP-IMPROV, we use the F-score as the evaluation metric.", "We first compare our full-modality baseline with several state-of-the-art multimodal recognition models under the full-modality condition.", "Results in Table 4 show that our full-modality baseline outperforms the other state-of-the-art models, which shows that our modality encoder network can extract effective representations for multimodal emotion recognition.", "Table 5 presents the experimental results of our proposed MMIN model under the different missing-modality testing conditions and the full-modality testing condition.", "On IEMOCAP, compared to the full-modality baseline results in Table 4, we see a significant performance drop under the uncertain missing-modality testing conditions, which indicates that the model trained under the full-modality condition is very sensitive to the missing modality problem.", "The intuitive solution, the Augmented baseline, which combines the missing-modality training set with the full-modality training set to train the baseline model, does significantly improve over the full-modality baseline under the missing-modality testing conditions, which indicates that data augmentation can help alleviate the problem of data mismatch between training and testing.", "More notably, our proposed MMIN significantly outperforms both the full-modality baseline and the augmented baseline under every possible missing-modality testing condition.", "It also outperforms the two baselines under the full-modality testing condition, even though the MMIN model does not use the full-modality training data.", "These results indicate that our proposed MMIN model can learn robust joint multimodal representations, so that it achieves consistently better performance under both the different missing-modality and the full-modality testing conditions.", "This is because our proposed MMIN method not only has the data augmentation capability, but also learns a better joint representation, which can preserve information from other modalities.", "We further analyze the performance under the different missing-modality conditions.", "Our MMIN model achieves significant improvement under the one-modality-available conditions ({a}, {v}, or {t}) compared with the augmented baseline, especially for the weak modalities {a} and {v}.", "It also brings some improvement over the augmented baseline even for the strong modality combinations, such as {a, t}.", "These experimental results indicate that the joint representation learned via MMIN did learn complementary information from the other modalities to compensate for the weak modalities.", "The bottom block in Table 5 shows the performance comparison on the MSP-IMPROV dataset.", "Our proposed MMIN model again significantly outperforms the two baselines under the different missing-modality and full-modality testing conditions, which demonstrates the good generalization ability of MMIN across different datasets.", "We also compare to the MCTN (Pham et al., 2019) model
which is the state-of-the-art model for the missing modality problem.", "Table 5 (performance comparison under the six possible missing-modality testing conditions and the full-modality testing condition; testing condition {a} means that only the acoustic modality is available and both the visual and textual modalities are missing, {a, v, t} refers to the full-modality testing condition where all of the acoustic, visual and textual modalities are available, and Average refers to the average performance over all six missing-modality conditions; columns are {a} / {v} / {t} / {a,v} / {a,t} / {v,t} / Average / {a,v,t}). IEMOCAP: Full-modality baseline, WA 0.4190 / 0.4574 / 0.5646 / 0.5488 / 0.7018 / 0.6217 / 0.5522 / 0.7651, UA 0.4719 / 0.3966 / 0.5549 / 0.5762 / 0.7257 / 0.5971 / 0.5537 / 0.7779; Augmented baseline, WA 0.5303 / 0.4864 / 0.6564 / 0.6395 / 0.7251 / 0.7082 / 0.6243 / 0.7617, UA 0.5440 / 0.4598 / 0.6691 / 0.6434 / 0.7435 / 0.7162 / 0.6293 / 0.7767; proposed MMIN, WA 0.5658 / 0.5252 / 0.6657 / 0.6399 / 0.7294 / 0.7267 / 0.6410▲ / 0.7650, UA 0.5900 / 0.5160 / 0.6802 / 0.6543 / 0.7514 / 0.7361 / 0.6524▲ / 0.7812▲; MCTN (Pham et al., 2019), WA 0.4975 / 0.4892 / 0.6242 / 0.5634 / 0.6834 / 0.6784 / 0.5894 / n/a, UA 0.5162 / 0.4573 / 0.6378 / 0.5584 / 0.6946 / 0.6834 / 0.5913 / n/a. MSP-IMPROV: Full-modality baseline, F1 0.2824 / 0.3295 / 0.4576 / 0.4721 / 0.5655 / 0.5368 / 0.4543 / 0.6523; Augmented baseline, F1 0.4278 / 0.4185 / 0.5544 / 0.5396 / 0.6038 / 0.6295 / 0.5455 / 0.6663; proposed MMIN, F1 0.4647 / 0.4471 / 0.5573 / 0.5740 / 0.6188 / 0.6411 / 0.5649▲ / 0.6855▲; MCTN (Pham et al., 2019), F1 0.3285 / 0.3810 / 0.5050 / 0.4683 / 0.5611 / 0.5886 / 0.4721 / n/a. As MCTN can", "not handle different missing-modality conditions in one unified model, we have to train a particular model under each missing-modality condition 3 .", "The comparison results demonstrate that our proposed MMIN model not only can handle both the different missing-modality and the full-modality testing conditions with a unified model, but also consistently outperforms the MCTN models under all missing-modality conditions.", "We conduct experiments to ablate the contributions of different components in MMIN, including the structure of the imagination module and the cycle consistency learning.", "Structure of the imagination module.", "We first investigate the impact of different network structures for the imagination module on performance.", "Specifically, we compare the Autoencoder and the CRA structure in MMIN, and we adopt the same parameter scale to ensure the fairness of the comparison.", "As shown in Table 6, the performance of the imagination module with the Autoencoder structure (MMIN-AE) is worse than that with the CRA structure under both the different missing-modality and the full-modality testing conditions.", "The performance comparison indicates that the CRA has a stronger imagination ability than the Autoencoder model.", "Cycle Consistency Learning.", "To evaluate the impact of the cycle consistency learning in MMIN, 3 We use features described in Sec.
4.3 and follow the training setting in (Pham et al., 2019) to conduct the MCTN experiments.", "The MCTN model cannot be evaluated under the full-modality testing condition because the target modalities cannot be None.", "we conduct experiments using MMIN with or without cycle consistency learning.", "As shown in Table 6, the model trained without cycle consistency learning suffers a performance loss under all conditions, which indicates that cycle consistency learning can enhance the imagination ability and learn more robust joint multimodal representations.", "We conduct detailed experiments on IEMOCAP to demonstrate the joint representation learning ability", "and the imagination ability of our MMIN model.", "Joint representation learning ability: Since the joint representation is expected to retain information from multiple modalities, we conduct experiments to evaluate the joint representation learning ability of MMIN.", "We compare MMIN to the baseline model under the matched-modality condition, in which the training data and the test data contain the same modalities.", "As shown in Table 7, compared to the baseline model, MMIN achieves on-par or even better performance, which demonstrates that MMIN has the ability to learn effective joint multimodal representations.", "We also notice that the data-augmented model cannot beat the corresponding matching partial-modality baseline model, which indicates that the data-augmented model cannot learn the joint representation.", "Imagination ability: Figure 3 visualizes the distribution of the ground-truth multimodal embeddings (h in Figure 2) and the MMIN imagined multimodal embeddings (h′ in Figure 2) for a male speaker and a female speaker using t-SNE (Maaten and Hinton, 2008).", "Table 6 (experimental results for the component contribution evaluation on IEMOCAP; columns are {a} / {v} / {t} / {a,v} / {a,t} / {v,t} / Average / {a,v,t}): MMIN-AE, WA 0.5404 / 0.5025 / 0.6588 / 0.6115 / 0.7203 / 0.7125 / 0.6244 / 0.7619, UA 0.5625 / 0.4836 / 0.6689 / 0.6246 / 0.7374 / 0.7187 / 0.6368 / 0.7677; MMIN-NoCycle, WA 0.5503 / 0.5116 / 0.6577 / 0.6239 / 0.7185 / 0.7202 / 0.6304 / 0.7498, UA 0.5821 / 0.5006 / 0.6705 / 0.6454 / 0.7438 / 0.7301 / 0.6454 / 0.7709; MMIN, WA 0.5658 / 0.5252 / 0.6657 / 0.6399 / 0.7294 / 0.7267 / 0.6410 / 0.7650, UA 0.5900 / 0.5160 / 0.6802 / 0.6543 / 0.7514 / 0.7361 / 0.6524 / 0.7812. We observe that the distribution of", "the ground-truth embeddings and the imagined embeddings are very similar; although the distribution of the visual modality embeddings deviates a little, this is mainly because the quality of the visual modality is poor in this dataset.", "This demonstrates that MMIN can imagine the representations of the missing modalities based on the available modalities.", "In this paper, we propose a novel unified multimodal emotion recognition model, the Missing Modality Imagination Network (MMIN), to improve emotion recognition performance under the uncertain missing-modality conditions of real application scenarios.", "The proposed MMIN can learn robust joint multimodal representations through cross-modality imagination via the Cascade Residual Autoencoder and cycle consistency learning.", "Extensive experiments on two public benchmark datasets demonstrate the effectiveness and robustness of our proposed model, which significantly outperforms the other baselines under both uncertain missing-modality and full-modality conditions.", "In future work, we will explore ways to further improve the robust joint multimodal representation.", "This work was supported by the National Key R & D
Program of China under Grant No. 2020AAA0108600, National Natural Science Foundation of China (No. 62072462), National Natural Science Foundation of China (No. 61772535), Beijing Natural Science Foundation (No. 4192028)." ]
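To make the cascade of Eqs. (2)-(4) and the joint representation R = concat(c_1, ..., c_B) concrete, here is a minimal PyTorch-style sketch. It is an illustration, not the authors' released implementation: the layer sizes follow Table 3 (384-256-128-64 and back, latent size 64, B = 5 RAs), but every class and variable name here is invented for the example.

```python
import torch
import torch.nn as nn

class ResidualAE(nn.Module):
    """One residual autoencoder (RA): encode to a latent c_k, decode to a residual z_k."""
    def __init__(self, sizes=(384, 256, 128, 64)):
        super().__init__()
        enc = []
        for i in range(len(sizes) - 1):
            enc += [nn.Linear(sizes[i], sizes[i + 1]), nn.ReLU()]
        rev = sizes[::-1]
        dec = []
        for i in range(len(rev) - 1):
            dec += [nn.Linear(rev[i], rev[i + 1]), nn.ReLU()]
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec[:-1])  # no activation on the residual output

    def forward(self, x):
        c = self.encoder(x)   # latent vector c_k (64-dim with the Table 3 sizes)
        z = self.decoder(c)   # residual output z_k
        return z, c

class CascadeResidualAE(nn.Module):
    """Cascade of B RAs implementing Eq. (2); returns h' of Eq. (3) and R = concat(c_1..c_B)."""
    def __init__(self, num_blocks=5, sizes=(384, 256, 128, 64)):
        super().__init__()
        self.blocks = nn.ModuleList([ResidualAE(sizes) for _ in range(num_blocks)])

    def forward(self, h):
        acc = torch.zeros_like(h)      # running sum z_1 + ... + z_{k-1}
        latents = []
        for ra in self.blocks:
            z, c = ra(h + acc)         # z_k = f_k(h + sum_{j<k} z_j)
            acc = acc + z
            latents.append(c)
        h_imagined = h + acc           # Eq. (3): h' = h + sum_k z_k
        joint_repr = torch.cat(latents, dim=-1)
        return h_imagined, joint_repr

# Two such cascades (forward and backward) would be coupled for cycle consistency,
# and a small classifier over joint_repr would give the emotion prediction of Eq. (5).
```

The backward direction of Eq. (4) would simply be a second CascadeResidualAE applied to the forward output; how the two reconstruction losses are measured is not specified in this excerpt.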
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "objective", "objective", "other" ]
[ "We present STARC (Structured Annotations for Reading Comprehension), a new annotation framework for assessing reading comprehension with multiple choice questions.", "Our framework introduces a principled structure for the answer choices and ties them to textual span annotations.", "The framework is implemented in OneStopQA, a new high-quality dataset for evaluation and analysis of reading comprehension in English.", "We use this dataset to demonstrate that STARC can be leveraged for a key new application for the development of SAT-like reading comprehension materials: automatic annotation quality probing via span ablation experiments.", "We further show that it enables in-depth analyses and comparisons between machine and human reading comprehension behavior, including error distributions and guessing ability.", "Our experiments also reveal that the standard multiple choice dataset in NLP, RACE (Lai et al., 2017), is limited in its ability to measure reading comprehension.", "47% of its questions can be guessed by machines without accessing the passage, and 18% are unanimously judged by humans as not having a unique correct answer.", "OneStopQA provides an alternative test set for reading comprehension which alleviates these shortcomings and has a substantially higher human ceiling performance.", "1 1 Introduction Assessment of reading comprehension is of paramount importance in education and science and is a key component of high-stakes evaluations such as the SAT examinations.", "Reading comprehension tasks are also central to NLP, where extensive efforts are invested in developing systems that try to match human-level performance.", "Despite 1 OneStopQA dataset, STARC guidelines and human experiments data are available at https://github.com/ berzak/onestop-qa the proliferation of NLP work on reading comprehension and the increasing number of large-scale reading comprehension datasets, key quality assurance issues such as question guessability, unwanted dataset biases, and the considerable success of simple pattern matching and slot filling heuristics remain open challenges for ensuring that evaluation benchmarks capture genuine reading comprehension.", "Further, existing annotation frameworks have very limited support for reading behavior analyses which go beyond simple accuracy statistics.", "In this work, we introduce STARC , a new annotation framework for multiple choice reading comprehension, which addresses these shortcomings.", "Our framework aims to ensure high annotation quality and supports detailed probing and comparisons of human and machine reading comprehension behavior.", "The following are the primary novel characteristics of our annotation scheme.", "Structured Answer Choices As opposed to existing multiple choice reading comprehension datasets, our framework has a principled and consistent answer structure.", "Specifically, every question has four possible answers.", "The first answer is the correct answer.", "Importantly, the correct answer typically does not appear verbatim in the passage.", "The second answer represents a misunderstanding of the critical information for answering the question correctly.", "The third answer refers to information in the passage that is not relevant for the question.", "The fourth distractor has no support in the passage.", "This structure reflects four fundamental types of responses, ordered by miscomprehension severity.", "Auxiliary Span Annotations To further enhance the versatility of the annotation scheme, the framework 
provides span annotations for the different answer choices.", "This approach creates a systematic correspondence between answers and their textual support.", "Specifically, the correct answer relies on a critical span which contains the essential information for answering the question.", "In contrast to span identification datasets such as SQUAD (Rajpurkar et al., 2016) and Natural Questions (Kwiatkowski et al., 2019), we do not consider the span as the correct answer, but rather as a text region that contains the critical information required for answering the question correctly.", "The second answer represents a misunderstanding of that same span.", "Finally, the information referred to in the third answer is marked in a distractor span .", "In this paper we demonstrate that the combination of a consistent answer structure with span annotations opens the door for new approaches to automatic verification of annotations and enables new types of analyses for reading comprehension.", "We further introduce OneStopQA , a new dataset for multiple choice reading comprehension which implements our annotation framework.", "OneStopQA is a carefully constructed high-quality dataset intended primarily for testing and analyses, thereby complementing the existing larger multiple choice dataset RACE (Lai et al., 2017), which also has a 4-answer format and is commonly used for training.", "OneStopQA is designed to be challenging for both machine and human readers.", "The dataset comprises 30 articles from the Guardian in three parallel text difficulty versions and contains 1,458 paragraph-question pairs with multiple choice questions, along with manual span markings for both correct and incorrect answers.", "Despite its shorter passages and more constrained annotation scheme, baselines perform worse on OneStopQA than on RACE and the performance of a state-of-the-art model is comparable on both datasets.", "We use OneStopQA to introduce an ablation-based framework for automatic verification of multiple choice reading comprehension materials and to measure the extent to which the dataset can be solved without performing reading comprehension.", "Our framework is inspired by prior work on tasks such as image captioning and Visual Question Answering (VQA), where models were shown to perform well despite limited reliance on the images or the questions (Jabri et al., 2016; Agrawal et al., 2016; Goyal et al., 2017; Chao et al., 2018).", "We utilize this framework to demonstrate the validity of OneStopQA annotations and their robustness to heuristics.", "Our analyses further reveal quality control issues in RACE.", "Machine readers are able to guess the correct answers to 47.1% of the questions in RACE without being exposed to the passage, as opposed to 37.2% for OneStopQA.", "When presented to humans via crowdsourcing, 18.3% of the questions in RACE are unanimously judged by three annotators as not having a single correct answer, compared to only 3.4% for OneStopQA.", "Using this human data, we establish an approximate ceiling above which model performance improvements are not likely to be meaningful: 88.8% on RACE and 97.9% on OneStopQA.", "We further verify this ceiling approximation with an in-lab human reading comprehension experiment in which we obtain a superior empirical human ceiling of 95.3% for OneStopQA as compared to 84.7% for RACE.", "These results are consequential in that state-of-the-art models are already around ceiling performance on RACE, while substantial room for improvement is still 
available for OneStopQA.", "Finally, we showcase how the structure of OneStopQA annotations can be used for detailed comparisons between human and machine readers.", "Specifically, we demonstrate that human subjects and a state-of-the-art machine reading comprehension model have similar distributions of erroneous answers, suggesting a deeper link between human and machine readers than previously reported.", "On the other hand, humans and machines are fundamentally different in their guessing behavior.", "We present STARC, an annotation framework for reading comprehension which combines structured answers with span annotations for both correct answers and distractors.", "We annotate and release OneStopQA, a dataset which adheres to this framework.", "We introduce a new methodology which leverages our annotations for automated data quality probing via ablation experiments.", "We showcase the value of the annotation framework for detailed analyses of human and machine reading comprehension behavior.", "Our experiments reveal that RACE is highly guessable and has a relatively low human ceiling due to low item quality in a large portion of the questions.", "OneStopQA does not have these drawbacks and can serve as an alternative out-of-domain challenge dataset for evaluations, compatible with training on RACE.", "The combination of the novel annotation framework and the presented experiments suggests that the proposed annotation framework and our dataset can improve both the depth and the breadth of reading comprehension evaluations.", "STARC is a new annotation framework accompanied by a protocol for increasing annotation quality and reducing annotation biases which can be exploited by either humans or machines for solving reading comprehension datasets without performing the intended task.", "The annotation scheme aims for the questions to be on a high difficulty level.", "Importantly, STARC tries to minimize the possibility of answering questions correctly using simple string-matching strategies, as well as guessing the correct answer without reading the passage.", "To focus on testing language comprehension, as opposed to other types of skills and knowledge, it aims to avoid questions that rely on numerical reasoning and substantial external world knowledge.", "It also refrains from questions that require the reader to speculate (for example, given some information on person X, ask about their likely position issue Y when this position is not stated in the text).", "Reading comprehension questions have four answers, structured in the following manner.", "A is the correct answer.", "Answering a question correctly requires comprehending information from a text span in the passage called the critical span .", "Importantly, with exceptions when necessary, the correct answer should not appear in the critical span in verbatim form.", "B is an incorrect answer which represents a plausible misunderstanding of the critical span.", "Neither the critical span nor the distractor span have to adhere to sentence boundaries, and both can be non-continuous.", "This structure introduces well-defined and consistent relations between the answers and the passage.", "C is an incorrect answer which refers to an additional span in the passage, called the distractor span .", "This answer can be anchored in the distractor span in various ways.", "For example, it may borrow keywords, or contain a correct fact that is stated in the distractor span but is not the correct answer to the question.", "D is an incorrect 
answer which is plausible a-priori, but has no support in the passage.", "Note that to be plausible, D often appeals to the reader's general world knowledge.", "Further, the answers are ordered by degree of comprehension, whereby A represents correct comprehension, B reflects the ability to identify the crucial information for answering the question but a failure to comprehend it, C reflects some degree of attention to the passage's content, and D provides no evidence for text comprehension.", "The utilization of B-type answers in particular enables probing comprehension at a deep level.", "The overall answer structure can support new types of error analyses beyond the correct/incorrect distinction by examining specific types of miscomprehension and their relation to the text.", "In order to reduce the effectiveness of answer elimination strategies, we developed additional guidelines on the joint form and content of the answers.", "These include a quality ranking of answer patterns, where the most preferred structures are those in which all answers have either similar phrasings or distinct phrasings.", "For all other patterns (e.g. three similarly worded answers and an outstanding answer), the answer types for the pattern should be distributed equally across questions.", "The guidelines also list dispreferred content relations between answers, such as B being the opposite of A. Finally, the guidelines specify that the answers across, and whenever possible within, questions should be of comparable length.", "We implemented the STARC annotation framework in a new reading comprehension dataset, OneStopQA.", "The textual materials of OneStopQA are drawn from the OneStopEnglish corpus (Vajjala and Lucic, 2018), which contains Guardian News Lessons articles from the English language learning portal onestopenglish.com by Macmillan Education.", "We chose articles that have non-repetitive content and collectively represent a diverse range of topics.", "The texts were cleaned of errors stemming from the conversion process from the original PDFs to plain text, and manually converted from British to American English spelling.", "Each article has three versions, corresponding to three text difficulty levels: Advanced, Intermediate and Elementary.", "The Advanced version is the original Guardian article.", "The Intermediate and Elementary articles are simplified versions of the original article created by professional editors at onestopenglish.com.", "Common simplifications ... Table 1 (RACE and OneStopQA corpus statistics): RACE Middle: 6,409 / 368 / 362 passages, 25,421 / 1,436 / 1,436 questions, 232.12 words per passage, 16.6 sentences per passage, 13.99 words per sentence, Flesch-Kincaid 3.24, SMOG 7.58; RACE High: 18,728 / 1,021 / 1,045 passages, 62,445 / 3,451 / 3,498 questions, 354.08 words per passage, 17.99 sentences per passage, 19.69 words per sentence, Flesch-Kincaid 7.06, SMOG 10.14; OneStopQA Elementary: 162 passages, 486 questions, 112.32 words per passage, 5.42 sentences per passage, 20.72 words per sentence, Flesch-Kincaid 7.32, SMOG 10.29; OneStopQA Intermediate: 162 passages, 486 questions, 126.97 words per passage, 5.4 sentences per passage, 23.53 words per sentence, Flesch-Kincaid 8.9, SMOG 11.4; OneStopQA Advanced: 162 passages, 486 questions, 138.6 words per passage, 5.36 sentences per passage, 25.84 words per sentence, Flesch-Kincaid 10.1, SMOG 12.21.", "OneStopQA has 30 articles, with 4 to 7 paragraphs per article, and a total of 162 paragraphs.", "Each paragraph has 3 to 12 sentences.", "Further statistics on the OneStopQA and RACE articles, along with readability estimates for the different text difficulty levels, are presented in Table 1.", "We note that OneStopQA paragraphs are considerably shorter than RACE articles.", "At the same time, even the Elementary version of OneStopQA has longer sentences and a higher text difficulty level compared to the High School version of RACE.", "We composed three reading comprehension questions for each paragraph,
resulting in 486 questions, and 1,458 question-paragraph pairs when considering all three text versions.", "All the questions are answerable based on any of the three difficulty levels of the paragraph.", "Furthermore, the questions are local to the paragraph; they are answerable without any additional information from the preceding nor the following paragraphs.", "All the spans were annotated manually for each question in all three versions of the paragraph.", "Two of the questions have the same or substantially overlapping critical spans, and the third question has a distinct critical span.", "No restrictions were imposed on the distractor spans.", "Statistics for the questions, answers and spans are presented in Table 3.", "Table 2 presents an annotated question for two paragraph difficulty levels.", "Appendix A contains details on the dataset development and piloting process.", "We report a series of experiments which assess human and machine reading comprehension on OneStopQA and compare it to RACE.", "We further Definition Answer Span Span Length Length A correct 7.2 (3.5) critical 37.9 (16.5) B incorrect 7.6 (3.6) C incorrect 8.1 (3.8) distractor 15.5 (11.8) D incorrect 6.9 (3.1) N/A N/A Table 3: STARC answer structure, and mean length (in words) of answers and spans in OneStopQA (standard deviation in parentheses).", "showcase the ability of our annotation framework to support automated dataset quality validation and enable in-depth comparisons between human and machine reading comprehension behavior.", "In this experiment, we benchmark two neural reading comprehension models, the Stanford Attentive Reader (AR) (Chen et al., 2016), and RoBERTA (Liu et al., 2019) a state-of-the-art model on RACE.", "We train the models on RACE, and evaluate their accuracy on RACE and OneStopQA.", "To reduce the impact of potential domain differences, we also provide an evaluation in which we further finetune the models on OneStopQA with 5-fold cross validation, where in each fold 18 articles are used for training, 6 for development and 6 for testing.", "Additionally, we report the performance of the commonly used sliding window baseline (Richardson et al., 2013).", "In parallel with the two neural model evaluation regimes for OneStopQA, we perform two evaluations for this baseline, one in which the window size is optimized on the RACE development set, and one in which it is optimized on OneStopQA using 5-fold cross validation.", "Table 4 presents the results of this experiment.", "We observe that the two weaker models, Sliding Window and Stanford AR, perform better on RACE than on OneStopQA.", "Particularly notable is the large drop in the performance of Stanford AR from 42.8 on RACE to 34.3 on OneStopQA ( p (cid:28) .", "001 , t-test).", "This suggests that OneStopQA is more robust to simple word-matching heuristics.", "The results for RoBERTa are comparable on OneStopQA and on RACE.", "We note that overall this is a strong outcome for OneStopQA in light of its span-based format, shorter paragraphs, and higher human ceiling performance which we discuss in Section 4.3.", "We further note that finetuning on OneStopQA preserves or improves performance across models by a small margin.", "Finally, the difficulty level of OneStopQA paragraphs has only a small and inconsistent effect on model performance.", "We introduce a new methodology for analyzing the quality of reading comprehension datasets through ablation studies.", "This methodology enables evaluating the robustness of OneStopQA to guessing 
heuristics and the validity of the relation between the answers and the span annotations.", "In each ablation study, we train and evaluate the performance of RoBERTa without a part of the textual input.", "The ablation studies are divided into two groups: Full component ablations, applicable to any multiple choice reading comprehension dataset.", "In these experiments we withhold either the question, the passage, or both during the training and testing of the model.", "Span ablations, which are enabled by the STARC annotations and hence apply only to OneStopQA.", "In the span ablation experiments we remove parts of the passage according to the span markings (an illustrative span-ablation helper is sketched at the end of this excerpt).", "These experiments enable empirical validation of the relation between answers and spans.", "We report the results of these ablation studies in the RoBERTa portion of Table", "5. Full component ablations: When removing the passage, we obtain an accuracy of 37.2% on OneStopQA, and comparable choice rates among the distractors.", "This is a key result which suggests that RoBERTa is not able to recover substantial information about the correct answer without the passage, and it provides evidence for the a-priori plausibility of all three distractor types.", "In contrast to this outcome, on RACE, the passage ablation experiment yields a significantly higher accuracy of 47.1 (p ≪ 0.", "001, t-test).", "The ability of RoBERTa to guess the correct answers to nearly half of the questions in RACE without requiring the passage leads to a credit assignment issue, where 22% of RoBERTa's performance on this dataset could in principle be attributed to question and answer patterns rather than reading comprehension.", "Table 4 (QA accuracy on RACE and OneStopQA; columns are RACE Mid / High / All, then OneStopQA without finetuning Ele / Int / Adv / All, then OneStopQA Ele / Int / Adv / All): Sliding Window 41.2 / 31.0 / 33.9, 25.6 / 26.2 / 27.5 / 26.7, 27.7 / 27.2 / 27.3 / 28.2; Stanford AR 40.0 / 43.9 / 42.8, 30.2 / 30.1 / 30.1 / 30.2, 34.2 / 34.3 / 34.3 / 34.3; RoBERTa Base 73.2 / 66.4 / 68.4, 69.5 / 69.1 / 67.7 / 68.8, 68.7 / 69.1 / 68.5 / 68.8; RoBERTa Large 86.6 / 81.3 / 82.9, 85.6 / 85.0 / 86.0 / 85.6, 86.0 / 85.4 / 86.4 / 86.0. We next exclude the question and find that OneStopQA is less robust than RACE in this", "regime, with an accuracy of 68.8 compared to 60.8 (p < 0.
001 , t-test).", "This result is likely reflecting the fact that unlike in RACE, the correct answer in OneStopQA is always stated or can be directly inferred from the passage.", "We note that compared to the no-passage ablation, the presence of the passage eliminates D as expected.", "Interestingly, the relative choice rate for C is high for the no-question ablation compared to the full model, suggesting that RoBERTa is able to rule out C only in the presence of the question.", "This is a desirable behavior, consistent with the requirement for the C distractor to contain information from the passage which could be possibly correct, while not being a correct answer to the question.", "Finally, 40.0 percent of the RACE questions are guessable even when both the question and the passage are not provided, compared to 34.7 for OneStopQA ( p (cid:28) 0 .", "001 , t-test).", "In the OneStopQA span ablation experiments, providing RoBERTa only with the critical span makes it focus on A and B as the only viable options, as expected.", "A similar C elimination outcome is obtained when the ablation is targeted at the distractor span only.", "Finally, removing the critical span, which should make the question unanswerable, results in a sharp drop in performance to an accuracy of 41.1, only 3.9% above withholding the entire passage.", "Interestingly, the selection rate of C is lower compared to the full passage ablation, an outcome we intend to investigate further in the future.", "Overall, these results confirm the robustness of OneStopQA to guessing as well as the tight correspondence between answers and spans.", "We envision extending this framework in the future for automatic identification of specific items with problematic annotations which could substitute item pilots with human subjects.", "In these experiments we assess human reading performance and guessing behavior, and further investigate OneStopQA and RACE question quality.", "2 Question Answering (QA) This experiment benchmarks human question answering performance.", "Participants are presented with a passage along with a question and its four answers, and are asked to select the correct answer based on the passage.", "After confirming their selection, participants are informed on whether they answered correctly and shown the correct answer.", "2 The human subject data was collected under MIT IRB protocol #1605559077 Cognitive Foundations of Human Language Processing and Acquisition.", "All subjects provided written consent prior to participation.", "Guessing (No Passage) The goal of this experiment is to determine the extent to which humans can guess the correct answer to questions without reading the passage.", "Participants see only the question and its four answers and are asked to provide their best guess for the correct answer.", "After confirming their selection, participants are informed on whether it was correct and shown the correct answer along with the passage.", "Question Validity Judging This experiment is designed to identify questions which do not have a unique correct answer.", "Participants are presented with the question, answers and the passage, and are asked to indicate whether the question has (A) one correct answer, (B) more than one correct answer, or (C) no correct answer.", "If (A) is selected, the participant further selects the correct answer.", "If (B) is selected, the participant is asked to mark all the answers that they consider to be correct.", "We deployed all three experiments on the crowd-sourcing 
platform Prolific (prolific.co), with a 6 trials batch for each subject.", "The first two trials were fixed practice items, one with a passage from OneStopQA and one from RACE.", "These trials were tailored for each experiment such that performing the respective task correctly is straightforward.", "Next, each participant performed 4 experimental trials.", "Two of the trials had passages from OneStopQA (one Advanced and one Elementary, taken from different articles), and two were from RACE (one Middle School and one High School).", "To encourage participants to perform the tasks well, in the QA and Guessing experiments participants received a monetary bonus for each correct answer.", "In all three experiments, participants who did not answer both practice trials correctly were excluded from the analysis.", "The materials for each of the three Prolific experiments are 1296 question-passage pairs, 648 from OneStopQA and 648 from RACE.", "The OneStopQA items are taken from 20 OneStopQA articles, with a total of 108 paragraphs.", "For each paragraph we use two paragraph difficulty levels Advanced and Elementary, combined with each of the 3 questions.", "The RACE materials include 108 Middle School articles and 108 High School articles from the RACE test set.", "We chose the articles at random among the articles that have three or more questions, and then randomly picked 3 questions for each article.", "In each of the three Prolific experiments we collected responses from three valid participants (i.e. participants who answered both practice trials correctly) for each question-passage pair.", "A single participant completed one batch in one of the three experiments, corresponding to a total of 2,916 unique participants (792 per experiment).", "Even in the presence of monetary incentives and participant filtering based on practice trials, it is hard to guarantee that crowd-sourcing workers are always performing the given task attentively.", "We therefore further ran the QA experiment with in-lab participants.", "For this experiment, we used a subset of 432 questions from the Prolific experiments' materials.", "We recruited 12 participants (6 undergraduate students and 6 post-graduate students), each completing 36 items.", "The items given to each participant were equally distributed between datasets and text difficulty levels, and guaranteed not to repeat the same article for RACE and the same paragraph for OneStopQA.", "The results of the human reading comprehension experiments are presented in the Humans portion of Table", "5. Comparisons were calculated using Satterthwaite's method applied to a mixed-effects model that treats subjects and questions as crossed random effects.", "All the experiments suggest clear advantages of OneStopQA as compared to RACE.", "In the Prolific QA experiment, participants obtain a higher overall accuracy of 80.7 on OneStopQA compared to 74.3 on RACE ( p < 0 . 
001 ).", "We note that our QA experiment reproduces the Mechanical Turk experiment in (Lai et al., 2017), which yielded a similar human performance of 73.3 on RACE.", "In the Guessing experiment, we observe that without exposure to the passage, participants were able to obtain an accuracy of 32.1 on OneStopQA as compared to 39.5 on RACE ( p (cid:28) 0 .", "001 ).", "For the Question Validity Judging experiment we report the percentage of questions on which all three participants have indicated that the question does not have a unique answer.", "This metric reveals a dramatic advantage of OneStopQA, with 3.4% of invalid questions as compared to 18.3% for RACE ( p (cid:28) 0 .", "001 ).", "We note that this result is substantially different from the percentage of invalid questions reported in Lai et al. (2017), where the authors have estimated that only 5.5% of the RACE questions are invalid.", "The judging experiment also enables us to devise a heuristic for approximating the ceiling performance on both datasets.", "To calculate it, we assign valid questions with a score of 1, and invalid questions with a score of 1 divided by the average number of answers considered correct across participants (where no correct answer is treated as 4 correct answers).", "The resulting performance ceiling is 88.8 for RACE and 97.9 for OneStopQA.", "The QA accuracy of our in-lab participants approaches this ceiling with 95.3 accuracy on OneStopQA versus 84.7 on RACE ( p < 0 . 01 ).", "The combination of this outcome with the results of our Question Validity experiment suggests that the human gap from perfect 100% accuracy on RACE is due mainly to poor item quality rather than high item difficulty.", "These results have important implications on current machine reading evaluations.", "With an accuracy of 82.9% for RoBERTa and even higher performance for ensemble models reported on the RACE public leader board, it is likely that current machine reading models are very close to exhausting the space of meaningful performance improvements on this dataset.", "On the other hand, a more substantial room for improvement is still available for OneStopQA.", "Our final analysis uses the structured annotations of OneStopQA for detailed comparisons of human and machine reading comprehension behavior.", "In particular, the annotations enable comparing the error distributions of humans and machines.", "Interestingly, we observe that the Prolific QA error distribution is similar to that of RoBERTa, where B is the most common error, C is the second most common error and D is the least common error.", "This error frequency order is in line with the strength order design of the distractors.", "Further, similarly to RoBERTa, humans are only slightly affected by the difficulty level of the paragraph, although differently from RoBERTa, human performance is consistently worse on the advanced level compared to the elementary level.", "These results suggest deeper parallels between human and machine reading comprehension behavior than previously observed via overall accuracy comparisons.", "Our no-passage guessing experiment on the other hand suggests interesting differences between humans and RoBERTa.", "First, RoBERTa, which is specifically trained on this task, has a higher guessing performance than humans on Prolific.", "Further, the overlap in the questions successfully guessed by humans and by RoBERTa is fairly small: the percentage of questions correctly guessed by both humans and RoBERTa is 18% for RACE and 12% for 
OneStopQA.", "We hypothesize that these results are due at least in part to RoBERTa picking up on statistical regularities in the question and answer training data which are difficult for humans to spot at test time.", "The STARC annotations enable gaining further insight into the difference in the guessing strategies of humans and machines: humans have a stronger preference for D ( p < . 05 , McNemar's test).", "This outcome makes sense in the absence of the paragraph, as while the other answers are constrained by the specifics of the paragraph, D distractors may appeal to general world knowledge and reasoning which can be beyond the capacities of RoBERTa.", "A considerable number of reading comprehension datasets have been introduced in NLP.", "A large fraction of these datasets can be broadly divided into three tasks: Cloze (Hermann et al., 2015; Hill et al., 2015; Bajgar et al., 2016), span identification QA (Rajpurkar et al., 2016; Nguyen et al., 2016; Trischler et al., 2017; Joshi et al., 2017; Kwiatkowski et al., 2019) and multiple choice QA (Richardson et al., 2013; Lai et al., 2017).", "Our approach primarily falls into the third category.", "The basic 4-answer format we use is identical to RACE (Lai et al., 2017), which enables training models on RACE and evaluating them on OneStopQA.", "Our dataset is considerably smaller than RACE, but is of appropriate size for robust evaluations and error analyses.", "As demonstrated in this work, OneStopQA annotations are of substantially higher quality than RACE, and enable analyses which are not possible with RACE.", "MCTest (Richardson et al., 2013) was created with a similar purpose to RACE, but has a low text difficulty level suitable for 7-year-olds.", "Span identification QA is a task in which the correct answer to the question is one or more textual spans which the reader is required to mark.", "This task differs from multiple choice reading comprehension in its focus on information retrieval, which limits the range of question types (e.g. 
forces the answers to be primarily named entities) and their difficulty level.", "While our approach contains span annotations, our notion of span is different from that in span identification QA: spans are not considered as answers but rather as text regions that contain the critical information for the respective answer.", "This difference enables a higher difficulty degree and a wider scope of question types.", "The combination of this approach with a multiple choice answer structure which always has a span misinterpretation distractor facilitates deeper probing of text understanding and is designed to allow for more robustness to simple pattern matching.", "Prior work has explored both manual and automatic auxiliary span annotations for correct answers in multiple choice QA datasets (Khashabi et al., 2018; Wang et al., 2019).", "Our framework extends such annotations to include multiple distractor types, with B distractors providing an additional guarantee that simply identifying the critical span is not sufficient for answering the question correctly.", "We further demonstrate the utility of our distractor structure for automatic verification of annotation quality through ablation experiments, as well as detailed error comparisons between human and machine readers.", "We introduce a new annotation framework for reading comprehension and an accompanying high-quality dataset.", "We leverage the novel structure of our annotations to develop a methodology for automatic validation of annotations and to perform detailed comparisons between human and machine reading comprehension.", "Our experiments further demonstrate substantial quality assurance issues with RACE, which are alleviated in our new dataset.", "Our results demonstrate the promise of our annotation framework and dataset in supporting a wide range of reading behavior analyses, as well as the feasibility of developing automated question validation tools for reading comprehension examinations for humans as exciting directions for future work.", "We thank Beining Jenny Zhang, Katherine Xiao, Margarita Misirpashayeva and Theodor Cucu for contributions to preparation of OneStopQA materials and collection of human subject data.", "We also thank Sowmya Vajjala for assistance with OneStopEnglish.", "We gratefully acknowledge support from Elemental Cognition and from NSF grant IIS-1815529, a Google Faculty Research Award, and a Newton Brain Science Award to RPL." ]
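The span ablations described above (answering from the critical span alone, or with the critical or distractor span removed) can be generated mechanically from STARC's span annotations. The helper below is a hypothetical illustration: it assumes spans are stored as character offsets into the paragraph and treats each span as one contiguous region, although the guidelines allow non-continuous spans; names do not reflect the released OneStopQA files.

```python
def ablate_passage(passage, spans, mode):
    """passage: paragraph text; spans: dict with 'critical' and 'distractor' mapped to
    (start, end) character offsets; mode selects which probe input to build."""
    def remove(text, start, end):
        return (text[:start] + " " + text[end:]).strip()

    c_start, c_end = spans["critical"]
    d_start, d_end = spans["distractor"]
    if mode == "critical_only":       # keep only the text needed to answer correctly
        return passage[c_start:c_end]
    if mode == "no_critical":         # the question should become unanswerable
        return remove(passage, c_start, c_end)
    if mode == "no_distractor":       # removes the support for the C answer
        return remove(passage, d_start, d_end)
    raise ValueError(f"unknown ablation mode: {mode}")
```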
[ "objective", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "result", "abstain", "abstain", "method", "result", "abstain", "method", "objective", "abstain", "method", "abstain", "objective", "abstain", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "abstain", "objective", "other", "other", "other", "method", "other", "other", "other", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Various models have been proposed to incorporate knowledge of syntactic structures into neural language models.", "However, previous works have relied heavily on elaborate components for a specific language model, usually recurrent neural network (RNN), which makes themselves unwieldy in practice to fit into other neural language models, such as Transformer and GPT-2.", "In this paper, we introduce the Dependency-based Mixture Language Models.", "In detail, we first train neural language models with a novel dependency modeling objective to learn the probability distribution of future dependent tokens given context.", "We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention.", "Extensive experiments and human evaluations show that our method can be easily and effectively applied to different neural language models while improving neural text generation on various tasks.", "1 1 Introduction Syntactic structures serve as the principle of how words are correctly combined to form sentences.", "It is widely acknowledged that learning syntactic structures should improve neural text generation (Shen et al., 2018; Peng et al., 2019; Du et al., 2020).", "Even though current neural language models, such as Transformer (Vaswani et al., 2017) and GPT-2 (Radford et al., 2019) have achieved outstanding performance without explicitly modeling latent syntactic structures, these models still fail to learn the long-range syntactic dependencies (Kun-coro et al., 2018; Xu et al., 2021).", "To leverage explicit syntactic knowledge in natural language generation (NLG), many methods have been proposed (Wu et al., 2017; Shen et al., 2018; Zhang et al., 2019; Kim et al., 2019; Du 1 Our code is available at https: //github.com/FadedCosine/Dependency-Guided-Neural-Text-Generation et al., 2020).", "We conclude from previous works that knowledge of syntactic structures can bring four advantages to neural language models: (1) Syntactic structures can be modeled to obtain better representations of natural language sentences (Jacob et al., 2018; Williams et al., 2018; Wang et al., 2019).", "(2) Jointly training syntactic structure parsing and language modeling can contribute to each other (Shen et al., 2018; Dyer et al., 2016; Kim et al., 2019; Du et al., 2020; Shen et al., 2021b).", "(3) Syntactic structures can be used to directly model the composition of language (Socher et al., 2013; Casas et al., 2020) and help with the long-range dependency problem by providing shortcuts for gradient backpropagation (Chung et al., 2017).", "(4) Integrating syntactic structures into a neural network can improve generalization via a better inductive bias (Shen et al., 2019; Zhang et al., 2019).", "Despite these advantages, it is not trivial to incorporate knowledge of syntactic structures into neural language models effectively and efficiently.", "Several practical problems arise: (1) Previous works (Chung et al., 2017; Shen et al., 2018; Dyer et al., 2016; Kim et al., 2019; Shen et al., 2019) have relied heavily on elaborate components for a specific language model, usually recurrent neural network (RNN) (Sutskever et al., 2014).", "These methods are difficult to be adapted to other neural language models, such as Transformer and GPT-2.", "(2) If jointly modeling language modeling and syntactic structure parsing, it will require much more time/memory during training or inference.", "To address these problems while keeping the advantages, we explore 
incorporating knowledge of syntactic structures in a different manner.", "In this work, we propose a novel dependency modeling objective to train neural language models to directly predict the current token's future dependent tokens given the history.", "We define the future dependent", "tokens of a specific token in a sentence as its children and parent in the dependency parse tree that will appear in the rest of the sentence.", "Further, we propose Dependency-based Mixture Language Models (DMLM) that, at each timestep, mix the previous dependency modeling probability distributions with self-attention to get the next-token probability.", "As shown in Table 1, the proposed method can be adapted to any neural language model without adding external networks or parameters.", "Our core idea can be illustrated in Figure 1 and Figure 2: when predicting the next token \"indicate\" after reading \"red figures on the screen\", common language models easily predict an incorrect word, such as \"indicates\", since their prediction relies heavily on the most recent word, \"screen\" in this case.", "However, our proposed DMLM directly looks back into the long-range context, and selects the next token from all the future dependent tokens predicted by previous tokens.", "According to the underlying dependency structure, DMLM assigns different weights to different tokens' future dependent tokens.", "Thus, the model is more likely to predict \"indicate\", since DMLM tends to treat the next token as a future dependent token of \"figures\" rather than of \"screen\".", "We conduct experiments with different neural language models including LSTM (Hochreiter and Schmidhuber, 1997), Transformer (Vaswani et al., 2017), and GPT-2 (Radford et al., 2019) across different tasks in conditional text generation, unconditional text generation, and language modeling.", "Through extensive experiments we demonstrate that DMLM consistently improves the generation quality according to both human evaluations and automatic metrics.", "Compared to other neural language models that incorporate syntactic knowledge,", "DMLM is architecturally simpler and easier to fit into any neural language model, while possessing wide applicability to different text generation tasks.", "Our goal is to propose a simple yet effective method that can improve neural text generation by learning from the underlying syntactic structure, and can fit into any auto-regressive generation model without using additional elaborate components.", "We first introduce a novel dependency modeling objective to force the model to directly predict the future dependent tokens of the current token.", "Based on the dependency modeling, we then present the proposed DMLM.", "It has been a challenge to equip neural language models with the capability of modeling long-range dependency in text (Dai et al., 2019).", "In particular, previous works (Wu et al., 2017) observe that vanilla RNNs can hardly capture many subtle long-range token dependencies effectively.", "On", "the other hand, though self-attention mechanisms can build direct connections between long-distance token pairs, it is still
elusive for Transformer to be aware of syntactic dependency structures while also obtaining strong language modeling performance (Shen et al., 2021a).", "The current neural language models are mostly trained purely with the language modeling objective using Maximum Likelihood Estimation (MLE).", "With the auto-regressive factorization, language modeling can be reduced to modeling the conditional distribution of the next token $x_t$ given the context $x_{<t} = \{x_1, \ldots, x_{t-2}, x_{t-1}\}$.", "However, in order to make neural language models aware of long-range dependency and syntactic structures, we propose the dependency modeling objective to train models to learn the probability distribution of the future dependent tokens directly.", "Following Ahmed et al. (2019), we define the future dependent tokens of a specific token in a sentence as its children and parent in the dependency parse tree that will appear in the rest of the sentence.", "Taking Figure 1 as an example, the future dependent tokens of \"figures\" are \"screen\" and \"indicate\", since \"red\" does not appear after \"figures\" in this sentence.", "Specifically, given a token sequence $x = \{x_1, \ldots, x_{T-1}, x_T\}$, where $T \in \mathbb{N}$ denotes the sequence length, we first use a dependency parser to generate a dependency tree.", "Then, we derive the future dependent token set $Z_t$ for each token $x_{t-1}$, where $Z_t = \{x_i \mid i \geq t,\ x_i \text{ is the child or parent of } x_{t-1}\}$.", "We train a language model to maximize the sum of log-likelihoods of the tokens in $Z_t$.", "This is equal to minimizing $L_{DM}(\theta) = -\sum_{t=1}^{T} \sum_{z_t \in Z_t} \log p_{dep}(z_t \mid x_{<t})$ (1), which is the dependency modeling objective.", "To give a categorical probability distribution over the next token, a standard approach for current neural language models is to encode the context into a fixed-size vector followed by an output embedding layer and a softmax function.", "In our case, given the context $x_{<t}$, we first train the language model to directly learn the probability distribution of $x_{t-1}$'s future dependent tokens $p_{dep}(w \mid x_{<t})$ by dependency modeling (Section 2.1).", "We then propose DMLM (depicted in Figure 2), which mixes the dependency modeling probability distributions $P_{dep} = \{p_{dep}(w \mid x_{<1}), \ldots, p_{dep}(w \mid x_{<t-1}), p_{dep}(w \mid x_{<t})\}$.", "All the probability distributions in $P_{dep}$ are weighted by self-attention, and summed to obtain the final next-token probability distribution.", "We can easily implement such a self-attention in both Transformer-based and RNN-based language models.", "For example, in Transformer and GPT-2, the penultimate layer seems to naturally learn alignments (Garg et al., 2019), so we use its average attention weights over all the attention heads as the dependency attention distribution.", "In RNN-based models, inspired by Merity et al. (2017) and Vaswani et al. (2017), at each timestep we linearly project the current hidden state $h_t \in \mathbb{R}^H$ to a query vector $q_t = W_Q h_t$ and a key vector $k_t = W_K h_t$, where $W_Q \in \mathbb{R}^{H \times H}$, $W_K \in \mathbb{R}^{H \times H}$, $q_t \in \mathbb{R}^H$, and $k_t \in \mathbb{R}^H$.", "To generate the dependency attention, we compute the match between the query $q_t$ and the context's keys $\{k_1, \ldots, k_{t-1}, k_t\}$ by taking the inner product, followed by a softmax to obtain the dependency attention distribution: $e^{(t)} = \{e^{(t)}_1, \ldots, e^{(t)}_{t-1}, e^{(t)}_t\}$ with $e^{(t)}_i = q_t^{\top} k_i$, $1 \leq i \leq t$, and $a^{(t)} = \mathrm{softmax}(e^{(t)} / \sqrt{H}) = \{a^{(t)}_1, \ldots, a^{(t)}_{t-1}, a^{(t)}_t\}$ (2), where $e^{(t)} \in \mathbb{R}^t$ and $a^{(t)} \in \mathbb{R}^t$.", "Here $H$ is the hidden state dimensionality used to scale the attention logits.", "The dependency attention distribution reveals which token in the context may have a strong dependency relation with the token to be predicted.", "Thus, the neural language model should pay more attention to previous tokens with high dependency attention scores, i.e., the next token is more likely to be a future dependent token of those tokens in the context.", "Formally, the next-token probability is the sum of the context's dependency modeling probability distributions weighted by the dependency attention scores: $p(w \mid x_{<t}) = \sum_{\tau=1}^{t} a^{(t)}_{\tau}\, p_{dep}(w \mid x_{<\tau})$ (3),", "where $p_{dep}(w \mid x_{<\tau})$ is the probability distribution of $x_{\tau-1}$'s future dependent tokens, since up to this point the neural language model has only been trained by dependency modeling.", "Then, we further finetune the neural language model using MLE, but with respect to our modified probability distribution given in Equation 3: $L_{LM}(\theta) = -\sum_{t=1}^{T} \log p(x_t \mid x_{<t})$ (4).", "For each timestep during inference, DMLM outputs a dependency modeling distribution, and we store it in a list.", "To predict the next token, DMLM applies the self-attention in Equation 2 to produce a dependency attention distribution over the context, and then the next-token probability can be calculated by Equation 3, where the list preserves all the $p_{dep}(w \mid x_{<\tau})$, $1 \leq \tau \leq t$.", "Although previous works mainly focus on language modeling, it has always been a thorny issue whether better language models lead to better performance in downstream tasks.", "Therefore, we showcase the performance of our proposed DMLM in three different tasks: conditional text generation (Section 3.1), unconditional text generation (Section 3.2), and language modeling (Section 3.3).", "To verify the effectiveness and architectural generalizability of our method, we conduct the generation tasks with three dominant neural language models, including LSTM, Transformer and GPT-2.", "We prefix the base model name with \"DM-\" to denote the corresponding Dependency-based Mixture language model.", "Specifically, we adopt AWD-LSTM (Merity et al., 2018) as our base LSTM, and further compare our DM-LSTM with PRPN (Shen et al., 2018) and ON-LSTM (Shen et al., 2019), which also incorporate knowledge of syntactic structures and are built on LSTM.", "In the same task, we use exactly the same hyper-parameters and setups for the pairs of base models and corresponding DM-models.", "Other details of the experimental setup for each task can be seen in Appendix A. For all the tasks, we use a state-of-the-art parser, HPSG Parser 2 (Zhou and Zhao, 2019), to get the dependency parse tree for each sentence in the datasets.", "We discuss the impact of the dependency parser in Appendix B. 3.1 Conditional Text Generation Setup We take story ending generation as the conditional text generation task, and evaluate our method on the ROCStories corpus (Mostafazadeh et al., 2016), which consists of 98,161 five-sentence stories.", "We follow the preprocessing 3 of Kong et al.
(2021) to randomly split ROCStories by 8:1:1 for training/validation/test, respectively, and delexicalize stories by masking all the male/female/unknown names with \"[MALE]\"/\"[FEMALE]\"/\"[NEUTRAL]\".", "We finally get a word-level vocabulary with 31,216 unique tokens.", "The conditional text generation task is to generate a reasonable ending given a four-sentence story context.", "For all models, we generate stories using nucleus sampling (Holtzman et al., [Footnote 2: https://github.com/DoodleJZ/HPSG-Neural-Parser] [Footnote 3: We use the preprocessed data in https://github.com/thu-coai/Stylized-Story-Generation-with-Style-Guided-Planning] [Table 2 - Automatic evaluation results for the conditional text generation task on ROCStories; columns are UNION, BERTScore, B-1, B-2, D-2, D-3, SB-2, SB-3: PRPN 83.37, 29.11, 21.45, 6.84, 13.22, 33.50, 95.17, 86.76; ON-LSTM 82.18, 29.41, 22.16, 7.33, 13.93, 35.71, 94.98, 85.80; AWD-LSTM 82.98, 29.57, 22.23, 7.31, 14.07, 35.71, 94.92, 85.88; DM-LSTM 83.97, 29.93, 22.54, 7.63, 14.92, 37.44, 94.47, 84.77; Transformer 81.39, 27.64, 21.28, 7.01, 17.48, 42.30, 93.18, 81.52; DM-Transformer 84.07, 28.20, 21.49, 7.29, 17.79, 42.08, 92.86, 81.36; GPT-2 84.41, 29.02, 21.79, 7.45, 17.09, 40.74, 93.51, 82.55; DM-GPT-2 85.31, 30.18, 22.81, 8.02, 17.98, 43.29, 93.18, 81.41]", "2020) with p = 0.", "5.", "We measure the generated story endings by the following automatic metrics: (1) UNION (Guan and Huang, 2020): a learnable unreferenced metric for evaluating the quality of generated stories; (2) BERTScore (Zhang et al., 2020): a metric that measures the semantic consistency between the generated endings and the references using BERT (Devlin et al., 2019); (3) BLEU (B-n) (Papineni et al., 2002): BLEU evaluates n-gram overlap between the generated stories and the references; (4) Distinct (D-n) (Li et al., 2016): the proportion of distinct n-grams in the outputs, used to evaluate the diversity of generated results.", "Since the Distinct score becomes extremely low for small n, we calculate it with n = 2, 3; (5) Self-BLEU (SB-n) (Zhu et al., 2018): the metric is calculated by computing the n-gram (n = 2, 3) BLEU score of each generated text with all other generated ones as references.", "Smaller Self-BLEU scores indicate better diversity.", "Results The experimental results of baselines and corresponding DM-models are shown in Table", "
Note that we do not conduct significant tests on Distinct since it is a document-level metric.", "We can see that, all the DM-models significantly outperform baseline models on almost all the metrics.", "Furthermore, compared with PRPN and ON-LSTM, our DM-LSTM performs signifi-Models LM score RLM score PRPN 5.24 5.75 ON-LSTM 5.20 5.59 AWD-LSTM 5.18 5.64 DM-LSTM 5.14 5.52 Transformer 5.00 5.59 DM-Transformer 4.97 5.49 GPT-2 4.89 5.55 DM-GPT-2 4.67 5.47 Table 4: Results of global metrics for the unconditional text generation task on EMNLP2017 WMT News.", "cantly better in all the metrics.", "This indicates that incorporating knowledge of syntactic structures in our proposed way can effectively contribute to both the quality and diversity of the story ending generation.", "Moreover, no matter what the base model is, our DM-model can substantially improves the conditional text generation.", "This demonstrates that our method can be effectively adapted to different neural language models, such as the large scale language model, GPT-2, while previous models like ON-LSTM can only be built on LSTM.", "Human evaluation To further evaluate the fluency and logic of generated stories, following (Guan et al., 2020), we conduct pair-wise comparisons between DM-models and corresponding 7762 Models Nucleusp 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 PRPN 41.48 45.77 55.32 64.23 83.98 109.3 172.09 302.57 ON-LSTM 37.46 42.98 46.16 56.69 72.36 98.06 152.60 274.43 AWD-LSTM 37.97 41.80 48.74 57.45 71.77 94.22 146.40 289.13 DM-LSTM 36.11 39.53 47.67 55.30 69.38 95.95 136.98 256.51 Transformer 45.37 46.36 50.90 60.27 70.74 91.65 125.46 222.27 DM-Transformer 37.74 40.75 43.25 49.92 60.28 76.77 104.03 182.29 GPT-2 41.19 44.05 47.86 53.97 63.18 81.45 112.81 192.10 DM-GPT-2 36.41 40.99 41.75 46.18 55.36 67.97 92.22 152.98 Table 5: GPT-2 Perplexity on 1 , 000 random samples with various sampling hyper-parameters generated by models trained on EMNLP2017 WMT News dataset.", "baselines.", "We randomly sample 100 story endings from each model.", "For each pair of stories (one by the DM-model and the other by the baseline, along with the beginning), five annotators are hired to give a preference (win, lose, or tie) from the following two aspects: (1) Grammaticality : whether a story ending is natural and fluent; (2) Logicality : whether a story is coherent to the given beginning and reasonable in terms of causal and temporal dependencies in the context.", "The detailed questionnaire and other details are shown in Appendix D. The average win/lose/tie rates of the human evaluation are shown in Table", "3. To measure the inter-annotator agreement, we calculate Krippendorff's alpha (Hayes and Krippendorff, 2007) for each pair-wise comparison, and all the results are fair agreement ( 0 . 2 0 . 4 ) or moderate agreement ( 0 . 4 0 . 6 ).", "The results show that our DM-models significantly outperform baseline models in both the grammaticality and logicality.", "Setup We perform experiments of unconditional text generation on EMNLP2017 WMT News dataset 4 .", "We use the preprocessed data of a recent work 5 (Caccia et al., 2020) that contains 5 , 268 distinct words with maximum sentence length 51.", "The training/validation/test set consists of 268 , 586 / 10 , 000 / 10 , 000 sentences.", "Following Caccia et al. 
(2020), we evaluate the models with the global metrics (Semeniuta et al., 2018): (1) Language Model score (LM score): we use the oracle language model to evaluate the negative log-likelihood of the generated text as a metric to reflect quality; (2) Reverse Language Model score (RLM score): we train a new language model on the generated text, and then evaluate the negative log-likelihood of a held-out set of real text.", "This metric can measure text diversity, since generated text with better diversity has a broader coverage of the real data space, so the new language model can be trained better, leading to a lower RLM score.", "Both the LM score and the RLM score are usually evaluated on sentences generated by purely random sampling.", "Besides, to further measure generation fluency, we directly use the public GPT-2 checkpoint of pretrained parameters, without finetuning, to calculate the GPT-2 Perplexity of generated samples.", "Results Table 4 shows the results of global metrics obtained by various models.", "All the DM-models again outperform the baselines.", "The consistently lower LM scores indicate that the generated [Footnote 4: http://statmt.org/wmt17/translation-task.htm] [Footnote 5: https://github.com/pclucas14/GansFallingShort/tree/master/real_data_experiments/data/news] [Table 7 - Perplexity of various language models on the validation and test sets of the Penn Treebank dataset; columns are #Params, Dev PPL, Test PPL: Pointer Sentinel-LSTM (Merity et al., 2017) 21M, 72.4, 70.9; RNNG (Dyer et al., 2016) -, 88.7; Variational RHN (Zilly et al., 2017) 23M, 67.9, 65.4; PRPN (Shen et al., 2018) -, 62.0; Fraternal dropout (Zolna et al., 2018) 24M, 58.9, 56.8; URNNG (Kim et al., 2019) -, 85.9; ON-LSTM (Shen et al., 2019) 25M, 58.3, 56.2; AWD-LSTM (Merity et al., 2018) 24M, 60.0, 57.3; DM-LSTM (Ours) 24M, 58.6, 56.2; AWD-LSTM-MoS (Yang et al., 2018) 22M, 56.5, 54.4; AWD-LSTM-DOC (Takase et al., 2018) 23M, 54.1, 52.4]", "sentences of DM-models are of better quality, while the consistently lower RLM scores also demonstrate that DM-models can generate more diverse sentences at the same time.", "In addition, each model is used to generate 1,000 sentences with various sampling hyper-parameters, and the GPT-2 Perplexity is further calculated.", "As shown in Table 5, our proposed method makes neural language models perform significantly better in terms of generation fluency.", "In particular, Transformer-based models gain a more significant improvement from DMLM.", "We conjecture that this is because, in our implementation, we directly use the penultimate multi-head attention layer of the Transformer to obtain the dependency attention distribution of DMLM.", "Thus, it can easily inherit all the strengths of Transformer-based models.", "Human evaluation Following previous work (Yu et al., 2017; Guo et al., 2018), we conduct a Turing test to further evaluate the generated text.", "In practice, we mix 100 randomly sampled sentences from each model, and another 100 sentences from the real test set.", "Five annotators are hired to judge whether each of the 900 sentences is created by a human or by a machine.", "Each sentence gets a +1 score when it is regarded as a real one, and 0 otherwise.", "The detailed questionnaire and other details are shown in Appendix D.
The average score for each model is shown in Table 6, from which we can see that all the DM-models surpass the baselines.", "Both automatic evaluations and human evaluations indicate that DMLM can help neural language models generate more readable, fluent, and natural sentences.", "Setup We evaluate the proposed method on the word-level language modeling task by measuring Perplexity (PPL) on the Penn Treebank (PTB) (Marcus et al., 1993; Mikolov et al., 2012) corpora.", "The PTB dataset has a vocabulary size of 10,000 unique words, and the training/validation/test set consists of 42,068/3,370/3,761 sentences.", "For this task, we mainly implement DMLM on the RNN-based language model, i.e., AWD-LSTM (Merity et al., 2018).", "For a fair comparison, our DM-LSTM uses exactly the same hyper-parameters and setups as AWD-LSTM.", "Since Transformer-based models' strong performance relies on training with large datasets, they will perform worse than random when trained on a small dataset (Shen et al., 2021a).", "We still report Transformer-based models' language modeling results on PTB in Appendix C. Results We compare our method with its base model, AWD-LSTM, and we report the results along with other state-of-the-art models in Table 7.", "Compared with AWD-LSTM, our DM-LSTM reduces the perplexity by 1.4 on the validation set and 1.1 on the test set, indicating that incorporating knowledge of syntactic structures in our proposed manner can substantially improve language modeling.", "Compared with other models that also leverage syntactic knowledge, our DM-LSTM strongly outperforms RNNG, PRPN, and URNNG.", "Moreover, though DM-LSTM does not make any changes to the architecture of the AWD-LSTM language model, it still achieves a perplexity comparable to ON-LSTM.", "Note that, since our method is model-agnostic, it can be harmonically combined with these stronger models as well. (Figure 3: Visualization of dependency attention distributions.)", "We show how our proposed method works by visualizing the dependency attention distributions.", "We use DM-Transformer to generate a sentence: \"red figures on the screen indicate falling stocks.\"", "For each generation step, we record this step's dependency attention distribution.", "When we finally generate the whole sentence, we get 9 distributions and plot Figure 3 from them.", "Each row in Figure 3 shows the dependency attention distribution of the model when generating the corresponding Y-axis token.", "When predicting the token \"indicate\", DMLM pays great attention to \"figures\".", "This is because these two tokens have a direct dependency connection in the dependency parse tree, and our method successfully captures this relationship.", "In addition, DMLM also helps the model better organize dependency information when the next tokens, such as \"screen\" and \"stocks\", have
dependencies on more than one token in the context.", "We perform case studies for a better understanding of the model performance.", "Table 8 provides examples of conditional text generation produced by our DM-models and other baselines.", "Obviously, all the DM-models can generate more reasonable and coherent story endings.", "Additionally, some examples of unconditional text generation are shown in Table 9 and Appendix E. These examples show that our DMLM can help base models generate more reasonable, readable, fluent, and natural sentences.", "Compared with vanilla RNN, our DM-RNN indeed increases the computational complexity from O ( T ) to O ( T 2 ) .", "In practice, we can follow Merity et al. (2017) to set a context window that allows DMLM looks L timesteps into the past at most, where L is the context length.", "However, our DMLM can efficiently apply to Transformer-based models without additional computational complexity.", "Many previous studies have shown that leveraging the knowledge of syntactic structures can improve NLG (Chelba, 1997; Roark, 2001; Emami and Je-linek, 2005; Buys and Blunsom, 2015).", "Mirowski and Vlachos (2015) incorporated syntactic dependencies into the RNN formulation, but they limited the scope to the scoring of complete sentences, not to next word prediction.", "Some other efforts have been done to integrate dependency structure into neural machine translation (NMT) from both the source and target side.", "Eriguchi et al. (2016) proposed a tree-to-sequence attentional NMT model where source-side parse tree was used.", "Wu et al. (2017) involved target syntactic trees into NMT model to jointly learn target translation and dependency parsing.", "Casas et al. (2020) introduced a syntactic inductive bias to NLG in an iterative non-autoregressive way.", "For neural language models, recently, Dyer et al. (2016) proposed recurrent neural network grammar (RNNG) to jointly model syntax and surface structure by incrementally generating a syntax tree and sentence.", "Subsequent work (Kim et al., 2019) extended the model to an unsupervised version.", "Shen et al. 
(2018) introduced the Parsing-Reading-Predict Networks (PRPN) to calculate syntactic distances among words and use self-attention to compose previous states.", "Its subsequent work (Shen et al., 2019) transferred the distance notion to LSTM cell, and introduced Ordered Neurons LSTM (ON-LSTM).", "edge of syntactic structures by introducing complex architectural changes.", "Therefore, it can get very unwieldy to adapt them to other neural language models, such as Transformer and GPT-2.", "In this paper, we introduce Dependency-based Mixture Language Models, which can incorporate knowledge of dependency structures into arbitrary auto-regressive generation models without any changes to the original architectures.", "Both automatic and human evaluation results in extensive experiments across different tasks and different architectures demonstrate the effectiveness and generalizability of our method.", "In the future, we will explore to incorporate the dependency labels into our method, and combine our DMLM with more neural language models.", "Second, we would like to integrate other linguistic knowledge, such as constituency structures and semantic information, into neural language models in our manner.", "This work was supported by National Key R&D Program of China (No.2018YFB1005100), Bejing Academy of Artificial Intelligence (BAAI) and State Key Laboratory of Media Convergence Production Technology and Systems.", "We appreciate the anonymous reviewers for their helpful comments.", "Xiaojun Wan is the corresponding author." ]
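The "future dependent tokens" construction described in the preceding record can be sketched with an off-the-shelf dependency parser. This is not the authors' released code (they use the HPSG Parser), and spaCy's parse may attach words differently, so the resulting sets can differ from the paper's example; the model name below is an assumption about what is installed locally.

```python
# Sketch: for each token, collect its parse-tree parent and children that occur
# later in the sentence (the "future dependent tokens" used as training targets).
import spacy

nlp = spacy.load("en_core_web_sm")  # assumption: the small English model is available

def future_dependent_sets(sentence: str):
    doc = nlp(sentence)
    sets = []
    for tok in doc:
        # parse-tree neighbors: children plus the head (the root's head is itself)
        neighbors = list(tok.children) + ([tok.head] if tok.head is not tok else [])
        future = sorted({nb.i for nb in neighbors if nb.i > tok.i})
        sets.append((tok.text, [doc[i].text for i in future]))
    return sets

for word, future in future_dependent_sets("red figures on the screen indicate falling stocks ."):
    print(f"{word:10s} -> {future}")
```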
[ "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "objective", "abstain", "method", "objective", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "method", "method", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "objective", "abstain", "other", "other", "other" ]
[ "This paper solves the fake news detection problem under a more realistic scenario on social media.", "Given the source short-text tweet and the corresponding sequence of retweet users without text comments, we aim at predicting whether the source tweet is fake or not, and generating explanation by highlighting the evidences on suspicious retweeters and the words they concern.", "We develop a novel neural network-based model, Graph-aware Co-Attention Networks (GCAN), to achieve the goal.", "Extensive experiments conducted on real tweet datasets exhibit that GCAN can significantly outperform state-of-the-art methods by 16% in accuracy on average.", "In addition, the case studies also show that GCAN can produce reasonable explanations.", "Social media is indispensable in people's daily life, where users can express themselves, access news, and interact with each other.", "Information can further spread through the social network.", "Opinions and sentiments on source stories can be reflected by user participation and interaction.", "The convenient and low-cost essence of social networking brings collective intelligence, but at the same time leads to a negative by-product, the propagation of misinformation such as fake news .", "Fake news is a kind of news story possessing intentionally false information on social media (Rashkin et al., 2017; Allcott and Gentzkow, 2017).", "The widespread of fake news can mislead the public, and produce unjust political, economic, or psychological profit for some parties (Horne and Adali, 2017; Allcott and Gentzkow, 2017).", "Data mining and machine learning techniques were utilized to detect fake news (Shu et al., 2017; Cha et al., 2020).", "Typical approaches rely on the content of new articles to extract textual features, such as n-gram and bag of words, and apply supervised learning (e.g., random forest and support vector machine) for binary classification (Shu et al., 2017).", "NLP researchers also learn advanced linguistic features, such as factive/assertive verbs and subjectivity (Popat, 2017) and writing styles and consistency (Potthast et al., 2018).", "Multi-modal context information is also investigated, such as user profiles (Yang et al., 2012; Liu and Wu, 2018) and retweet propagation (Ruchansky et al., 2017; Shu et al., 2019a).", "Nevertheless, there are still critical challenges in detecting fake news online.", "First, existing content-based approaches (Castillo et al., 2011; Potthast et al., 2018; Shu et al., 2019a) require documents to be long text, e.g., news articles, so that the representation of words and sentences can be better learned.", "However, tweets on social media are usually short text (Yan et al., 2015), which produces severe data sparsity problem.", "Second, some state-of-the-art models (Ruchansky et al., 2017; Liu and Wu, 2018; Shu et al., 2019a) require a rich collection of user comments for every news story, to learn the opinions of retweeters, which usually provide strong evidences in identifying fake news.", "However, most users on social media tend to simply reshare the source story without leaving any comments (Kwak et al., 2010).", "Third, some studies (Ma et al., 2018) consider that the pathways of information cascade (i.e., retweets) in the social network are useful for classifying misinformation, and thus learn the representations of the tree-based propagation structures.", "However, it is costly to obtain the diffusion structure of retweets at most times due to privacy concerns (Li et al., 2018).", "Many users 
choose to hide or delete the records of social interactions.", "Fourth, if the service providers or the government agencies desire to inspect who are the suspicious users who support the fake news, and which topics do they concern in producing fake news (Reis et al., 2019), existing models cannot provide explanations.", "Although dEFEND (Shu et al., 2019a) can generate reasonable explanation, it requires both long text of source articles and text of user comments.", "This paper deals with fake news detection under a more realistic scenario on social media.", "We predict whether a source tweet story is fake, given only its short text content and its retweet sequence of users , along with user profiles .", "That said, we detect fake news under three settings:", "(a) short-text source tweet,", "(b) no text of user comments, and", "(c) no network structures of social network and diffusion network.", "Moreover, we require the fake news detection model to be capable of explainability , i.e., highlighting the evidence when determining a story is fake.", "The model is expected to point out the suspicious retweeters who support the spreading of fake news, and highlight the words they especially pay attention to from the source tweet.", "To achieve the goal, we propose a novel model, G raph-aware C oA ttention N etwork ( GCAN ) 1 .", "We first extract user features from their profiles and social interactions, and learn word embeddings from the source short text.", "Then we use convolutional and recurrent neural networks to learn the representation of retweet propagation based on user features.", "A graph is constructed to model the potential interactions between users, and the graph convolution network is used to learn the graph-aware representation of user interactions .", "We develop a dual co-attention mechanism to learn the correlation between the source tweet and retweet propagation, and the co-influence between the source tweet and user interaction.", "The binary prediction is generated based on the learned embeddings.", "We summarize the contributions as follows.", "(1) We study a novel and more realistic scenario of fake news detection on social media.", "(2) For accurate detection, we develop a new model, GCAN, to better learn the representations of user interactions, retweet propagation, and their correlation with source short text.", "(3) Our dual co-attention mechanism can produce reasonable explanations.", "(4) Extensive experiments on real datasets demonstrate the promising performance of GCAN, comparing to state-of-the-art models.", "The GCAN explainability is also exhibited in case studies.", "We organize this paper as follows.", "Section 2 reviews the relevant approaches to fake news detection in social media.", "We describe the problem statement in Section", "3. Then in Section 4, the details of our proposed GCAN model will be elaborated.", "Section 5 demonstrates the evaluation settings and results.", "We conclude this work in Section 6.", "Content-based approaches rely on the text content to detect the truthfulness of news articles, which usually refer to long text.", "A variety of text characteristics are investigated for supervised learning, including TF-IDF and topic features (Castillo et al., 2011), language styles (e.g., part of speech, factive/assertive verbs, and subjectivity) (Popat, 2017), writing styles and consistency (Potthast et al., 2018), and social emotions (Guo et al., 2019).", "Zhao et al. 
(2015) find the enquiry phrases from user responses are useful, and Ma et al. (2016) use recurrent neural networks to learn better representations of user responses.", "User-based approaches model the traits of users who retweet the source story.", "Yang et al. (2012) extract account-based features, such as is verified, gender, hometown, and number of followers.", "Shu et al. (2019b) unveil user profiles between fake and real news are significantly different.", "CRNN (Liu and Wu, 2018) devise a joint recurrent and convolutional network model (CRNN) to better represent retweeter's profiles.", "Session-based heterogeneous graph embedding (Jiang et al., 2018) is proposed to learn the traits of users so that they can be identified in shared accounts.", "However, since such a method relies on session information, it cannot be directly applied for fake news detection.", "Structure-based approaches leverage the propagation structure in the social network to detect fake news.", "Sampson et al. (2016) leverage the implicit information, i.e., hashtags and URLs, to connect conversations whose users do not have social links, and find such implicit info can improve the performance of rumor classification.", "Ma et al. (2017) create a kernel-based method that captures high-order patterns differentiating different types of rumors.", "Ma et al. (2018) develop a tree-structured recursive neural networks to learn the embedding of rumor propagation structure.", "Although multi-relational graph embedding methods (Feng et al., 2019; Wang and Li, 2019) are able to effectively learn how different types of entities (related to source news ar-Table 1: Comparison of related studies.", "Column notations: news story texts (NS), response comments (RC), user characteristics (UC), propagation structure (PS), social network (SN), and model explainability (ME).", "For the NS column, S and L indicates short and long text, respectively.", "ticles) interact with each other in a heterogeneous information network for classification tasks, they cannot be applied for the inductive setting, i.e., detecting", "detecting the truthfulness of new-coming tweets.", "Hybrid-based approaches consider and fuse multi-modal context information regarding the source tweets.", "CSI (Ruchansky et al., 2017) learns the sequential retweet features by incorporating response text and user profiles, and generates suspicious scores of users based on their social interactions.", "Wang et al. 
(2018) develop an event adversarial neural network to learn transferable features by removing the event-specific features, along with convolutional neural networks to extract textual and visual features.", "dEFEND (Shu et al., 2019a) jointly learns the sequential effect of response comments and the correlation between news content and comments, and uses an attention mechanism to provide explainability.", "We compare our work and the most relevant studies in Table 1.", "The uniqueness of our work lies in: targeting short text, requiring no user response comments, and allowing model explainability.", "Let the set of tweet stories be $\{s_1, s_2, \ldots\}$, and $U = \{u_1, u_2, \ldots, u_{|U|}\}$ be a set of users.", "Each $s_i$ is a short-text document (also called the source tweet), given by $s_i = \{q_{i1}, q_{i2}, \ldots, q_{il_i}\}$ indicating $l_i$ words in story $s_i$.", "Each $u_j \in U$ is associated with a user vector $x_j \in \mathbb{R}^d$ representing the user features with $d$ dimensions.", "When a news story $s_i$ is posted, some users will share $s_i$ and generate a sequence of retweet records, which is termed a propagation path.", "Given a news story $s_i$, we denote its propagation path as $R_i = \{\ldots, (u_j, x_j, t_j), \ldots\}$, where $(u_j, x_j, t_j)$ depicts the $j$-th user $u_j$ (with their feature vector $x_j$)", "who retweets story $s_i$, and $j = 1, 2, \ldots, K$ (i.e., $K = |R_i|$).", "We denote the set of users who retweet story $s_i$ as $U_i$.", "In $R_i$, we denote the user who originally shares $s_i$ as $u_1$ at time $t_1$.", "For $j > 1$, user $u_j$ retweets $s_i$ at $t_j$ ($t_j > t_1$).", "Each story $s_i$ is associated with a binary label $y_i \in \{0, 1\}$ to represent its truthfulness, where $y_i = 0$ indicates story $s_i$ is true, and $y_i = 1$ means $s_i$ is fake.", "Given a source tweet $s_i$, along with the corresponding propagation path $R_i$ containing the users $u_j$ who retweet $s_i$ as well as their feature vectors $x_j$, our goal is to predict the truthfulness $y_i$ of story $s_i$, i.e., binary classification.", "In addition, we require our model to highlight a few users $u_j \in U_i$ who retweet $s_i$ and a few words $q_{ik} \in s_i$ that can interpret why $s_i$ is identified as a true or fake one.", "We develop a novel model, Graph-aware Co-Attention Networks (GCAN), to predict fake news based on the source tweet and its propagation-based users.", "GCAN consists of five components.", "The first is user characteristics extraction: creating features to quantify how a user participates in online social networking.", "The second is news story encoding: generating the representation of words in the source tweet.", "The third is user propagation representation: modeling and representing how the source tweet is propagated by users, based on their extracted characteristics.", "The fourth is dual co-attention mechanisms: capturing the correlation between the source tweet and users' interactions/propagation.", "The last is making prediction: generating the detection outcome by concatenating all learned representations.", "To depict how users participate in social networking, we employ their metadata and profiles to define the feature vector $x_j$ of every user $u_j$.", "The extracted features are listed as follows: (1) number of words in a
user's self-description, (2) number of words in u j 's screen name, (3) number of users who follows u j , (4) number of users that u j is following, (5) number of created stories for u j , (6) time elapsed after u j 's first story, (7) whether the u j account is verified or not, (8) whether u j allows the geo-spatial positioning, (9) time difference between the source tweet's post time and u j 's retweet time, and (10) the length of retweet path between u j and the source tweet (1 if u j retweets the source tweet).", "Eventually, every user feature vector x j R v is generated, where v is the number of features.", "The given source tweet is represented by a word-level encoder.", "The input is the one-hot vector of each word in story s i .", "Since the length of every source story is different, we perform zero padding here by setting a maximum length m .", "Let E = [ e 1 , e 2 , ..., e m ] R m be the input vector of source story, in which e m is the one-hot encoding of the m -th word.", "We create a fully-connected layer to generate word embeddings, V = [ v 1 , v 2 , ..., v m ] R d m , where d is the dimensionality of word embeddings.", "The derivation of V is given by: V = tanh ( W w E + b w ) (1) where W w is the matrix of learnable weights, and b c is the bias term.", "Then, we utilize Gating Recurrent Units (GRU) (Chung et al., 2014) to learn the words sequence representation from V .", "The source tweet representation learning can be depicted by: s t = GRU ( v t ) , t { 1 , ..., m } , where m is the GRU dimensionality.", "We denote the source tweet representation as S = [ s 1 , s 2 , ..., s m ] R d m .", "The propagation of source tweet s i is triggered by a sequence of users as time proceeds.", "We aim at exploiting the extracted user feature vectors x j , along with the user sequence spreading s i , to learn user propagation representation.", "The underlying idea is that the user characteristics in real news propagations are different from those of fake ones.", "Here the input is the sequence of feature vectors of users retweeting s i , denoted by P F ( s i ) = (cid:104) x 1 , x 2 , ..., x t , ..., x n (cid:105) , where n is the fixed length of observed retweets.", "If the number of users sharing s i is higher than n , we take the first n users.", "If the number is lower than n , we resample users in P F ( s i ) until its length equals to n .", "GRU-based Representation.", "Given the sequence of feature vectors P F ( s i ) = (cid:104) ..., x t , ..., (cid:105) , we utilize GRU to learn the propagation representation.", "Each GRU state has two inputs, the current feature vector x t and the previous state's output vector h t 1 , and one output vector h t .", "The GRU-based representation learning can be depicted by: h t = GRU ( x t ) , t { 1 , ..., n } , where n is the dimensionality of GRU.", "We generate the final GRU-based user propagation embedding h R d by average pooling, given by h = 1 n (cid:80) nt =1 h t .", "CNN-based Representation.", "We take advantage of 1-D convolution neural network to learn the sequential correlation of user features in P F ( s i ) .", "We consider consecutive users at one time to model their sequential correlation, i.e., (cid:104) x t , ..., x t + 1 (cid:105) .", "Hence the filter is set as W f R v .", "Then the output representation vector C R d ( t + 1) is given by C = ReLU ( W f X t : t + 1 + b f ) (2) where W f is the matrix of learnable parameters, ReLU is the activation function, X t : t + 1 depicts sub-matrices whose first row's index is from t = 
1 to $t = n - \lambda + 1$ (with $\lambda$ denoting the number of consecutive users covered by the filter), and $b_f$ is the bias term.", "We aim at creating a graph to model the potential interaction among users who retweet source story $s_i$.", "The idea is that some correlations between users with particular characteristics can reveal the possibility that the source tweet is fake.", "To fulfill such an idea, a graph $G_i = (U_i, E_i)$ is constructed for the set of users who share source story $s_i$ (i.e., $U_i$), where $E_i$ is the corresponding edge set.", "Since the true interactions between users are unknown, we consider $G_i$ to be a fully-connected graph, i.e., there is an edge $e \in E_i$ between every pair of distinct users $u, u' \in U_i$, and $|E_i| = \frac{n(n-1)}{2}$.", "To incorporate user features in the graph, each edge $e \in E_i$ between users $u$ and $u'$ is associated with a weight derived from the cosine similarity between their feature vectors $x$ and $x'$, given by $\frac{x \cdot x'}{\|x\|\,\|x'\|}$.", "We use a matrix $A \in \mathbb{R}^{n \times n}$ to represent these weights between any pair of nodes $u$ and $u'$ in graph $G_i$.", "A graph convolution network (GCN) layer (Kipf and Welling, 2017) is created based on the constructed graph $G_i$ for source tweet $s_i$.", "A GCN is a multi-layer neural network that operates on graph data and generates embedding vectors of nodes according to their neighborhoods.", "GCN can capture information from a node's direct and indirect neighbors through stacking layer-wise convolutions.", "Given the matrix $A$ for graph $G_i$, and $X$ depicting the matrix of feature vectors for users in $G_i$, the new $g$-dimensional node feature matrix $H^{(l+1)} \in \mathbb{R}^{n \times g}$ can be derived by $H^{(l+1)} = \sigma(\tilde{A} H^{(l)} W_l)$ (3), where $l$ is the layer number, $\tilde{A} = D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ is the normalized symmetric weight matrix ($D_{ii} = \sum_j A_{ij}$), and $W_l \in \mathbb{R}^{d \times g}$ is the matrix of learnable parameters at the $l$-th GCN layer.", "$\sigma$ is an activation function, e.g., $\mathrm{ReLU}(x) = \max(0, x)$.", "Here $H^{(0)}$ is set to be $X$.", "We choose to stack two GCN layers to derive the learned graph-aware representation, denoted as $G \in \mathbb{R}^{g \times n}$.", "We think the evidence of fake news can be unveiled by investigating which parts of the source story are concerned by which kinds of retweet users, and that fake clues can be reflected by how retweet users interact with each other.", "Therefore, we develop a dual co-attention mechanism to model the mutual influence between the source tweet (i.e., $S = [s_1, s_2, \ldots, s_m]$) and the user propagation embeddings (i.e., $C = [c_1, c_2, \ldots, c_{n-\lambda+1}]$ from Section 4.3), and between the source tweet and the graph-aware interaction embeddings (i.e., $G = [g_1, g_2, \ldots, g_n]$ from Section 4.4).", "Equipped with co-attention learning, our model is capable of explainability by looking into the attention weights between retweet users in the propagation and words in the source tweet.", "In other words, by extending the co-attention formulation (Lu et al., 2016), the proposed dual co-attention mechanism aims to attend to the source-tweet words and graph-aware interaction users simultaneously (source-interaction co-attention), and also attend to the source-tweet words and propagated users simultaneously (source-propagation co-attention).", "Source-Interaction Co-attention.", "We first compute a proximity matrix $F \in \mathbb{R}^{m \times n}$ as $F = \tanh(S^{\top} W_{sg} G)$, where $W_{sg}$ is a $d \times g$ matrix of learnable parameters.", "By treating the proximity matrix as a feature, we can learn to predict the source and interaction attention maps, given by $H^s = \tanh(W_s S + (W_g G) F^{\top})$ and $H^g = \tanh(W_g G + (W_s S) F)$ (4), where $W_s \in \mathbb{R}^{k \times d}$, $W_g \in \mathbb{R}^{k \times g}$ are", "matrices of learnable parameters.", "The proximity matrix $F$ can be thought of as transforming the user-interaction attention space to the source-story word attention space, and vice versa for its transpose $F^{\top}$.", "Then we can generate the attention weights of source words and interaction users through the softmax function: $a^s = \mathrm{softmax}(w_{hs}^{\top} H^s)$ and $a^g = \mathrm{softmax}(w_{hg}^{\top} H^g)$ (5), where $a^s \in \mathbb{R}^{1 \times m}$ and $a^g \in \mathbb{R}^{1 \times n}$ are the vectors of attention probabilities for each word in the source story and each user in the interaction graph, respectively.", "$w_{hs}, w_{hg} \in \mathbb{R}^{1 \times k}$ are learnable weights.", "Eventually we can generate the attention vectors of source-story words and interaction users through a weighted sum using the derived attention weights, given by $\hat{s}_1 = \sum_{i=1}^{m} a^s_i s_i$ and $\hat{g} = \sum_{j=1}^{n} a^g_j g_j$ (6), where $\hat{s}_1 \in \mathbb{R}^{1 \times d}$ and $\hat{g} \in \mathbb{R}^{1 \times g}$ are the learned co-attention feature vectors that depict how words in the source tweet are attended by users who interact with one another.", "Source-Propagation Co-attention.", "The process to generate the co-attention feature vectors, $\hat{s}_2 \in \mathbb{R}^{1 \times d}$ and $\hat{c} \in \mathbb{R}^{1 \times d}$, for the source story and user propagation, respectively, is the same as for source-interaction co-attention, i.e., creating another proximity matrix to transform them into each other's space.", "We skip the repeated details due to the page limit.", "Note that the GRU-based user representations are not used to learn the interactions with the source tweet.", "The reason is that what the user profiles in the retweet sequence look like is also important, as suggested by CRNN (Liu and Wu, 2018), and should", "be emphasized separately.", "Nevertheless, the CNN-based user representations (i.e., features that depict the sequence of user profiles) have been used in the co-attention mechanism to learn their interactions with the source tweet.", "We aim at predicting fake news using the source-interaction co-attention feature vectors $\hat{s}_1$ and $\hat{g}$, the source-propagation feature vectors $\hat{s}_2$ and $\hat{c}$, and the sequential propagation feature vector $h$.", "Let $f = [\hat{s}_1, \hat{g}, \hat{s}_2, \hat{c}, h]$, which is then fed into a multi-layer feedforward neural network that finally predicts the label.", "We generate the binary prediction vector $\hat{y} = [\hat{y}_0, \hat{y}_1]$, where $\hat{y}_0$ and $\hat{y}_1$ indicate the predicted probabilities of the label being 0 and 1, respectively.", "It can be derived through $\hat{y} = \mathrm{softmax}(\mathrm{ReLU}(f W_f + b_f))$ (7), where $W_f$ is the matrix of learnable parameters, and $b_f$ is the bias term.", "where $\Theta$ denotes all learnable parameters in the entire neural network.", "We choose the Adam optimizer to learn $\Theta$ as it can determine the learning rate adaptively.", "We conduct experiments to answer three questions: (1) whether our GCAN model is able to achieve satisfactory performance of fake news detection, compared to state-of-the-art methods?", "(2) how does each component of GCAN contribute to the performance?", "(3) can GCAN generate a convincing explanation that highlights why a tweet is fake?", "tweets, along with their corresponding sequences of retweet users.", "We choose only the true and fake labels as the ground truth.", "Since the original data does not contain user profiles, we use user IDs to crawl user information via the Twitter API.", "Competing Methods.", "We compare our GCAN with state-of-the-art methods and some baselines, as listed below.", "(1) DTC (Castillo et al., 2011): a decision tree-based model combining user profiles and the source tweet.", "(2) SVM-TS (Ma et al., 2015): a
linear support vector machine classi-fier that utilizes the source tweet and the sequence of retweet users' profiles.", "(3) mGRU (Ma et al., 2016): a modified gated recurrent unit model for rumor detection, which learns temporal patterns from retweet user profile, along with the source's features.", "(4) RFC (Kwon et al., 2017): an extended random forest model combining features from retweet user profiles and the source tweet.", "(5) CSI (Ruchansky et al., 2017): a state-of-the-art fake news detection model incorporating articles, and the group behavior of users who propagate fake news by using LSTM and calculating the user scores.", "(6) tCNN (Yang et al., 2018): a modified convolution neural network that learns the local variations of user profile sequence, combining with the source tweet features.", "(7) CRNN (Liu and Wu, 2018): a state-of-the-art joint CNN and RNN model that learns local and global variations of retweet user profiles, together with the resource tweet.", "(8) dEFEND (Shu et al., 2019a): a state-of-the-art co-attention-based fake news detection model that learns the correlation between the source article's sentences and user profiles.", "Model Configuration.", "Our model is termed GCAN .", "To examine the effectiveness of our graph-aware representation, we create another version GCAN-G , denoting our model without the graph convolution part.", "For both our models and competing methods, we set the number of training epochs to be 50.", "The hyperparameter setting of GCAN is: number of retweet users = 40, word embedding dim = 32, GRU output dim = 32, 1-D CNN output filter size = 3, 1-D CNN output dim = 32, and GCN output dim = 32.", "The hyperparame-ters of competing methods are set by following the settings mentioned in respective studies.", "Metrics & Settings.", "The evaluation metrics include Accuracy, Precision, Recall, and F1.", "We randomly choose 70% data for training and 30% for testing.", "The conducted train-test is repeated 20 Table 3: Main results.", "times, and the average values are reported.", "Main Results.", "The main results are shown in Table", "3. We can clearly find that the proposed GCAN significantly outperforms the best competing methods over all metrics across two datasets, improving the performance by around 17% and 15% on average in Twitter15 and Twitter16, respectively.", "Even without the proposed graph-aware representation, GCAN-G can improve the best competing method by 14% and 3% on average in Twitter15 and Twitter16, respectively.", "Such promising results prove the effectiveness of GCAN for fake news detection.", "The results also imply three insights.", "First, GCAN is better than GCAN-G by 3.5% and 13% improvement in Twitter15 and Twitter16, respectively.", "This exhibits the usefulness of graph-aware representation.", "Second, the dual co-attention mechanism in GCAN is quite powerful, as it clearly outperforms the best non-co-attention state-of-the-art model CSI.", "Third, while both GCAN-G and dEFEND are co-attention-based, additional sequential features learned from the retweet user sequence in GCAN-G can significantly boost the performance.", "Early Detection.", "We further report the performance (in only Accuracy due to page limit) by varying the number of observed retweet users per source story (from 10 to 50 ), as exhibited in Figure 2 and Figure", "3. 
It can be clearly seen that our GCAN consistently and significantly outperforms the competitors.", "Even with only ten retweeters, GCAN can still achieve 90% accuracy.", "Such results show that GCAN is able to provide accurate early detection of spreading fake news, which is crucial.", "Figure 2: Accuracy by # retweet users in Twitter15.", "Ablation Analysis.", "We report how each component of GCAN contributes by removing it from the entire model.", "Below, ALL denotes using all components of GCAN.", "By removing dual co-attention, GRU-based representation, graph-aware representation, and CNN-based representation, we have sub-models -A, -R, -G, and -C, respectively.", "Sub-model -S-A denotes the one without both source tweet embeddings and dual co-attention.", "The results are presented in Figure 4 (GCAN ablation analysis in Accuracy; Twitter15 / Twitter16: -S-A 0.52 / 0.64, -A 0.59 / 0.65, -R 0.735 / 0.70, -G 0.88 / 0.78, -C 0.89 / 0.88, ALL 0.915 / 0.91).", "We can find that every component indeed makes a significant contribution, especially dual co-attention (-A) and the representation learning of user propagation and interactions (-R and -G).", "Since the source tweet provides fundamental clues, the accuracy drops significantly without it (-S-A).", "The co-attention weights derived in Section 4.5 over source tweet words and retweet users (source-propagation co-attention) make our GCAN explainable.", "By exhibiting how the attention weights are distributed, evidential words and users in predicting fake news can be revealed.", "Note that we do not consider source-interaction co-attention for explainability because user interaction features learned from the constructed graph are not intuitively interpretable.", "Explainability on Source Words.", "To demonstrate the explainability, we select two source tweets in the test data.", "Figure 6: Visualization of attention weights for user propagations of 3 fake (upper F1-F3) and 3 true source tweets; from left to right is retweet order, and darker colors refer to higher attention weights.", "Figure 7: Evidential words highlighted by GCAN in the source tweet (upper; e.g., breaking: huge explosion of an #oil pipeline belonging to @saudi_aramco near sudair, #saudiarabia) and suspicious users highlighted by GCAN in the retweet propagation (bottom), in which each column is a user characteristic (uid, verified, creation time, description length, path to source); note that only a few user characteristics are presented.", "One is fake ( breaking: ks patient at risk for ebola: in strict isolation at ku med center in kansas city #kwch12 ), and the other is real ( confirmed: this is irrelevant. rt @ks-
dknews: confirmed: #mike-brown had no criminal record. #ferguson ).", "We highlight evidential words with higher co-attention weights in font sizes of word clouds, as exhibited in Figure", "5. GCAN predicts the former to be fake with stronger attention on words breaking and strict, and detects the latter as real since it contains confirmed and ir-relevant.", "Such results may correspond to the common knowledge (Rashkin et al., 2017; Horne and Adali, 2017) that fake news tends to use dramatic and obscure words while real news is attended by confirmed and fact checking-related words.", "Explainability on Retweet Propagation.", "We aim to exploit the retweet order in propagations to unfold the behavior difference between fake and real news.", "We randomly pick three fake (F1-F3) and three true (T1-T3) source stories, and plot their weights from source-propagation co-attention (Sec-tion 4.5), as exhibited in Figure 6, in which the horizontal direction from left to right denotes the order of retweet.", "The results show that to determine whether a story is fake, one should first examine the characteristics of users who early retweet the source story.", "The evidences of fake news in terms of user characteristics may be evenly distributed in the propagation.", "Explainability on Retweeter Characteristics.", "The source-propagation co-attention of our GCAN model can further provide an explanation to unveil the traits of suspicious users and the words they focus on.", "A case study is presented in Figure 7.", "We can find that the traits of suspicious users in retweet propagation can be: accounts are not verified, shorter account creation time, shorter user description length, and shorter graph path length to the user who posts the source tweet.", "In addition, what they highly attend are words breaking and pipeline.", "We think such kind of explanation can benefit interpret the detection of fake news so as to understand their potential stances.", "In this study, we propose a novel fake news detection method, Graph-aware Co-Attention Networks (GCAN).", "GCAN is able to predict whether a short-text tweet is fake, given the sequence of its retweeters.", "The problem scenario is more realistic and challenging than existing studies.", "Evaluation results show the powerful effectiveness and the reasonable explainability of GCAN.", "Besides, GCAN can also provide early detection of fake news with satisfying performance.", "We believe GCAN can be used for not only fake news detection, but also other short-text classification tasks on social media, such as sentiment detection, hate speech detection, and tweet popularity prediction.", "We will explore model generalization in the future work.", "Besides, while fake news usually targets at some events, we will also extend GCAN to study how to remove event-specific features to further boost the performance and explainability.", "This work is supported by Ministry of Science and Technology (MOST) of Taiwan under grants 109-2636-E-006-017 (MOST Young Scholar Fellowship) and 108-2218-E-006-036, and also by Academia Sinica under grant AS-TP-107-M05." ]
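The graph-aware representation described above (a fully-connected retweeter graph with cosine-similarity edge weights, followed by two stacked GCN layers as in Equation (3)) can be sketched in a few lines. This is a minimal illustration assuming PyTorch; the class name, dimensions, and toy input are illustrative only and do not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def cosine_weight_matrix(X):
    # X: (n, d) user feature vectors; entry (u, u') is the cosine similarity
    # between the two users' feature vectors (fully-connected graph).
    Xn = F.normalize(X, dim=-1)
    return Xn @ Xn.t()


def normalize_adjacency(A):
    # A_tilde = D^{-1/2} A D^{-1/2} with D_ii = sum_j A_ij (Kipf and Welling, 2017).
    deg = A.sum(dim=-1).clamp(min=1e-12)
    d_inv_sqrt = deg.pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)


class TwoLayerGCN(nn.Module):
    """Two stacked GCN layers: H^(l+1) = ReLU(A_tilde H^(l) W_l), with H^(0) = X."""

    def __init__(self, d_in, g_out):
        super().__init__()
        self.w0 = nn.Linear(d_in, g_out, bias=False)
        self.w1 = nn.Linear(g_out, g_out, bias=False)

    def forward(self, X):
        A_tilde = normalize_adjacency(cosine_weight_matrix(X))
        H1 = torch.relu(A_tilde @ self.w0(X))
        H2 = torch.relu(A_tilde @ self.w1(H1))
        return H2  # (n, g) graph-aware user embeddings


# Toy usage: n = 5 retweet users, each with a d = 10 profile feature vector.
G_emb = TwoLayerGCN(d_in=10, g_out=32)(torch.randn(5, 10))
```

In practice X would hold the profile feature vectors of the users who retweeted one source story, and the output corresponds to the graph-aware embeddings G that feed the co-attention module.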
[ "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "objective", "method", "abstain", "objective", "abstain", "objective", "objective", "objective", "method", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "other" ]
[ "Sentence encoders based on the transformer architecture have shown promising results on various natural language tasks.", "The main impetus lies in the pre-trained neural language models that capture long-range dependencies among words, owing to multi-head attention that is unique in the architecture.", "However, little is known for how linguistic properties are processed, represented, and utilized for downstream tasks among hundreds of attention heads inside the pre-trained transformer-based model.", "For the initial goal of examining the roles of attention heads in handling a set of linguistic features, we conducted a set of experiments with ten probing tasks and three downstream tasks on four pre-trained transformer families (GPT, GPT2, BERT, and ELECTRA).", "Meaningful insights are shown through the lens of heat map visualization and utilized to propose a relatively simple sentence representation method that takes advantage of most influential attention heads, resulting in additional performance improvements on the downstream tasks.", "Sentence encoders in transformer architectures as in GPT, BERT (Vaswani et al., 2017; Radford, 2018; Devlin et al., 2019) and ELECTRA (Clark et al., 2020) have shown promising results on various natural language understanding (NLU) tasks, such as question answering, text entailment and natural language inference (NLI) (Bowman et al., 2015), owing to their pre-training capabilities in modeling languages.", "The pre-training effects of the transformer-based approaches are known to be crucial for obtaining superior performance in various downstream NLU tasks.", "The main impetus lies in capturing long-range dependencies among words obtainable with bidirectional learning and self-attention (Devlin et al., 2019) and sufficiently varied corpora of a large quantity (Radford et al., 2019).", "Despite all the recent successes of the transformer-based models, little is known for how linguistic properties are processed and represented internally when the architectures are used.", "Given that self-attention heads are unique in the family of transformer architectures, we attempt to answer the question of how basic linguistic properties are captured with the attention heads across the models and used for downstream tasks.", "Once we figure out the roles of attention heads in storing various linguistic properties, we should be able to modulate them to maximize the performance of the downstream tasks.", "Given the motivation, we analyze several publicly available pre-trained transformer encoders (BERT, GPT, GPT2, and ELECTRA) trained with different model capacities ranging from 144 to 384 attention heads and 12 to 24 layers.", "Considering the output vector from each attention head of an encoder as a mini-sentence embedding, we examine whether certain linguistic properties are stored in embeddings among ten sentence probing tasks (Conneau and Kiela, 2018) that cover surface, syntactic, and semantic information and require different linguistic properties ( e.g. 
the depth of a parsed sentence).", "Each of the probing tasks is treated as if it were a downstream task for the examination; a classifier is attached for each of the primitive linguistic properties.", "In order to predict the depth of the parse tree, for example, an n-ary classifier is connected, where n is the number of possible depths.", "In order to aggregate and summarize the performance results out of all the attention heads, we construct an accuracy heat map for each probing task, where the patterns across layers and attention heads can be recognized easily.", "By examining the heat map, we can observe the patterns of how the attention heads contribute to the accuracy of each probing task, including whether an individual attention head is contributing to multiple linguistic features together or just specialized for a particular feature.", "Aiming at producing improved sentence representation, we use the analysis result that allows for selecting and concatenating the outputs of superior attention heads.", "The sentence representations from the hidden layers and the top-n attention heads are compared to check whether using only influential attention heads selectively could help certain downstream tasks.", "This attempt is in contrast with the common approach of using the output of the last layers of a transformer-based encoder as the representation that is fed into a downstream task.", "Our hypothesis is those final representations from the top of the transformer-based encoders might not be the best not only in carrying primitive linguistic properties of the language but also for downstream tasks.", "All the source code is publicly available 1 .", "The major contribution of our research is twofold: 1) we suggest an analysis method which helps understand where linguistic properties are learned and represented along attention heads in transformer architectures and 2) we show that using analysis results, attention heads can be maximally utilized for performance gains during the fine-tuning process on the downstream tasks and for capturing linguistic properties.", "Several studies looked into the representations learned by a neural network for various language properties (Adi et al., 2016; Qian et al., 2016a,b).", "A similar line of work focused on learned linguistic features inside the word and sentence embeddings.", "They used downstream tasks in order to probe surface information, syntactic and semantic information (Shi et al., 2016; Conneau et al., 2018).", "Some recent work looked inside the sentence encoders with various depths, by analyzing the hidden states at a layer-level (Belinkov et al., 2017; Peters et al., 2018) and even at a neuron-level (Dalvi et al., 2018).", "Tenney et al. (2019a,b) attempted to understand linguistic characteristics learned in a series of pre-trained encoder models by jointly analyzing their behaviors across different NLP tasks.", "been two streams of work: 1) visual analysis of attention weights to associate various functionalities and 2) analysis of the characteristics of the output representations from individual attention heads.", "For the first category, Vig and Jesse (2019) developed a visualization tool for attention weights of BERT and GPT2 and identified notable heads but without any quantitative analysis.", "Ghader and Monz (2017) showed the extent to which attention agrees with traditional alignments in neural machine translation (MT).", "Jain and Wallace (2019) and Brunner et al. 
(2019) on the other hand argued that attention rarely provides an explanation of model predictions.", "They showed through attention map analysis that attention weights frequently are not correlated with other measures of feature importance.", "For the second category that attempts to discover various roles attention heads play, Raganato and Tiedemann (2018) studied the characteristics of individual attention heads from the transformer, pre-trained with an MT task and evaluated on a limited suite of linguistic tasks, POS tagging, NER tagging, and chunking.", "Similarly, Clark et al. (2019) showed that some attention heads are specialized for dependency parsing and coreference resolution.", "Michel et al. (2019) showed through an ablation study that some dedicated heads have a significant role in MT and revealed the dynamics of attention heads during the training process.", "Voita et al. (2019) provided a method to identify the major role of each attention head in a transformer model trained for MT. The two studies are limited to MT and a particular transformer model, BERT.", "Unlike the recent studies mentioned above, our analysis is more comprehensive in its scope for generalizability.", "The analysis probes a variety of surface, syntactic, and semantic information at sentence levels with different transformer encoders pre-trained on language modeling tasks.", "More importantly, our work goes beyond an analysis and suggests a method of utilizing the analysis results for performance gains on several downstream tasks.", "It not only proposes a simple yet new method for the downstream tasks but also validates the analysis of the attention mechanisms.", "To the best of our knowledge, this is the first attempt to do an in-depth analysis of the seven recent pre-trained encoders for their internal workings in handling linguistic features, not to mention the newly proposed way for improvements on the downstream tasks.", "Consider a transformer-based encoder M , typically with a stack of L identical layers, each of which makes use of multi-head self-attention, and a two sub-layer feed-forward network coupled with layer normalization and residual connection (see Figure 1a).", "For a given input sequence x = ( x 1 , x 2 , . . . , x n ) , each word embedding xis concatenated with a positional encoding and fed into the encoder layer to produce an attention head output h i,j R d head where i and j indicate the indices of the layer and the attention head, respectively.", "Then a series of sub-layers produce hidden states of the i -th encoding layer z i R d model for each encoder.", "For all pre-trained encoders, d head = 64 and d model = H d head where H is the number of attention heads per layer.", "Since the transformer-based encoders encode the input sequence word by word, z i and h i,j are produced individually for given word x k along the input sequence x.", "In order to produce a sequence-level representation, we need to select one of the input representations of the sequence.", "Since the selection method depends on the chosen pre-trained model, we defer a detailed discussion to Section 4.1.", "For now, we assume z i and h i,j have been already determined with the specific word chosen from the input sequence and consider it as the sentence-level representation.", "Consider a classification task where the pre-trained encoder predicts a linguistic feature intended in a sentence probing task.", "Assume we have a labeled dataset containing pairs of a sentence and a linguistic property label ( e.g. 
tense).", "For a given sentence x and a label l in the dataset, the pre-training model ( e.g. BERT) encodes x and produces vectors corresponding to z i and h i,j .", "Usually, only the vector from the last layer z i = L is used as the input feature representing the sentence for the classification task.", "However, in order to inspect the role of each internal layer for a linguistic property, we use { z i,l , l } for all i to train a logistic regression classifier on a train dataset and record classification accuracy s ( z i ) on a test dataset (see Figure 1b).", "Each accuracy score is then compared to the accuracy of the last layer, and then the best performance among the encoding layers is measured.", "We consider this comparison as a way of generating primitive evidence that hidden states from an internal layer provide more useful linguistic information than the representation from the last layer.", "Similar to Section 3.1, we also train a logistic regression classifier on { h i,j , l } and record classification accuracy s ( h i,j ) for all i and j .", "That is, every attention head is evaluated by feeding its own output vector to the classifier as a feature (see Figure 1c).", "We assume the more an attention head stores the information essential to the probing task, the higher its accuracy.", "We construct a heat map of classification accu-Encoder L H L H Parameters GPT 12 12 144 110M GPT2 12 12 144 117M BERTBASE 12 12 144 110M BERTLARGE 24 16 384 340M ELECTRASMALL 12 4 48 14M ELECTRABASE 12 12 144 110M ELECTRALARGE 24 16 384 340M Table 1: Specification of the seven pre-trained encoders: the numbers of encoding layers ( L ), attention heads per layer ( H ), all the attention heads used ( L H ) and trained parameters.", "racy for attention heads on x-axis and layers on y-axis, so that we can easily identify the distribution of the excited attention heads for the linguistic property handled in the pre-trained model.", "The overall trend of a heat map indicates the extent to which the activation is widely distributed or localized across different layers and attention heads.", "Given the analysis results, we now propose a method for generating a new sentence representation to improve not only the probing tasks but also other downstream tasks.", "New representations are tested within the chosen pre-trained models in this work but can be applied to all other transformer-based encoders.", "Given an encoder model M , we sort the attention heads along with their classification valida-tion' accuracy s ( h i,j ) measured on a validation dataset (in order to prevent look-ahead bias during the selection process) for a given task, based on the attention head-wise evaluation method as in Section 3.2.", "Then top-n attention heads are selected and simply concatenated (see Figure 2) to form a new representation.", "We expect that the resulting vector h n R n d head would be able to store more precious information for the task than the vectors constructed out of other attention heads since it consists of superior attention heads.", "In order to make comparisons against the embeddings from different encoding layers, we also train the classifier with { h n , l } and record the corresponding classification test' accuracy s ( h n ) measured on the test dataset.", "For fair comparisons, however, we set n to H (the number of attention heads per layer) so that reconstructed sentence embedding h n could have the same dimension to that of hidden states, d model .", "We ran experiments for seven different 
encoders with unique characteristics, as shown in Table 1.", "GPT (Radford, 2018) was trained by basic Language Modeling (LM) on the BookCorpus dataset.", "GPT2 (Radford et al., 2019) was originally trained with the largest model capacity (1.5B parameters) with massive text dataset and LM, but we select base model for fair comparison.", "BERT (Devlin et al., 2019), which adopted masked LM (MLM) with next sentence prediction (NSP) for better contextualized embedding, was trained on Book-Corpus and English Wikipedia datasets.", "The most recent one, ELECTRA, was trained with replaced token detection (RTD) in the generator-discriminator mode.", "For GPT and GPT2, we pulled the representative sentence embedding z i and h i,j from the last input token with Byte-Pair Encoding tokenizer (Sennrich et al., 2016).", "For the BERT and ELECTRA family, we appended a special token < CLS > , which was originally designed to train sentence representations, in front of every input sequence and pulled the sentence embedding from it, using WordPiece tokenizer (Wu et al., 2016).", "Also, the implementation of the all transformers in our work are utilized from the Huggingface's transformers library (Wolf et al., 2019).", "Ten sentence probing tasks enable us to check whether the sentence embeddings generated by the encoders store the linguistic properties specific to the individual tasks.", "Table 2 shows a description of each probing task with its number of classes, roughly indicating the difficulty of the task.", "For each probing task, we evaluated performance of three types of representation; s ( z i ) , s ( h i,j ) and BERTBASEBERTLARGEGPTGPT 2 ELECTRALARGE Length Depth SubjNum BigramShift CoordInversion OddManOut Figure 3: Heat maps of attention head-wise evaluation on sentence probing tasks.", "s ( h n ) for a given pre-trained encoder by training the simple classifier with 256 batch size on the RMSProp optimizer (details on Appendix A).", "After measuring the classification accuracy for using the representation from each attention head, s ( h i,j ) for all i , j , we created a heat map showing the accuracy distribution for a pre-trained encoder and a sentence probing task.", "Figure 3 shows 30 heat maps arranged for seven pre-trained encoders and six sentence probing tasks (full results are shown in Appendix B).", "For each heat map, the brighter the color in a region, the higher the accuracy is for the corresponding attention heads.", "Comparing the heat maps along with the different probing tasks for an encoder, we can see that the influential attention heads with bright colors appear in different layers, either localized or distributed.", "This indicates that the information related to different tasks is processed at different locations and with different levels of association among attention heads.", "For the Length and Depth tasks, requiring surface and syntactic information, for example, the accuracy of the heads in the lower layers starts to diminish from the mid-upper layers.", "On the other hand, the attention heads in the mid-layers are activated for SubjNum and CoordInversion, which are more or less syntactic information.", "For BigramShift and OddManOut, which are more semantic, the attention heads along the upper layers are mostly activated.", "These results provide more detailed analyses and meaningful insights regarding the behavior of attention heads on different layers than the empirical results of Raganato and Tiedemann (2018) who shows the attention heads in lower and upper layers of 
the basic transformer tend to embed syntactic information and semantic information, respectively.", "More interestingly, the BigramShift and OddManOut heat maps show that all of the five encoder models represent word orders and verb/noun contexts starting from the Tasks Encoder BERTBASEBERTLARGE last best top-12 last best top-16 layer layer heads layer layer heads Length 58 .", "Comparing the heat maps along with the transformer types, we can observe that the heatmaps within the same family show similar patterns, while those from different families tend to show different distributions of the superior attention heads.", "For example, the GPT family tends to show cooperation with a larger number of attention heads for the SubjNum and CoordInversion tasks while the BERT family consists of only a few well-educated attention heads.", "In the case of BigramShift and OddManOut, the majority of upper attention heads of the BERT family are more strongly associated with word order and verb/noun meanings with higher accuracy than those of the GPT family.", "Interestingly, ELECTRALARGE shows unique patterns for most of the probing tasks; high-performance heads are located on lower layers except for OddManOut, whereas the heads on the lower layers do not seem to deal with information for the probing tasks.", "ELECTRASMALL and ELECTRABASE model have similar heat maps (see Appendix B), but the ELECTRALARGE model is totally different from them.", "These tendency implies that the learning behaviors on the attention heads are not strictly similar among each other for the same pre-training tasks even with the same architecture.", "Having observed that different attention heads on different layers play their roles for different probing tasks, we devised a method of producing new embeddings as in 3.3 and ran an experiment to compare it against two baselines for the ten probing tasks.", "Table 3 reports on a comparison result of three embeddings constructed by the BERT family: the last layer z i = L , the best-performing layer z best , and the reconstructed sentence embedding h n = H for each task and each pre-trained encoder (full results are in Appendix B).", "Comparing the accuracy between the last and best layers, we observe that the last layer is no better than the best layers for any of the probing tasks.", "From this, we can infer that certain linguistic features are dominantly processed on earlier layers and no further on later layers.", "The performance comparison between using the output of the best layer and the reconstructed sentence embedding (proposed) clearly shows that classification accuracy is increased significantly (19.22% in median) with the proposed method for almost all the tasks.", "It strongly supports that the proposed method can be employed to discover superior attention heads that can make up the final representation for processing specific linguistic information.", "Note that the newly constructed sentence embeddings consist of attention head outputs only.", "Our results imply that these embeddings might possess substantial information as much as the hidden states of the layers, which are produced by passing through the multi-head attention layers and the feed-forward network.", "We evaluated the new embedding construction method for more complex tasks in order to see whether it extracts not only simple linguistic features but also rich sentence features from the pre-trained encoder for such tasks.", "Three downstream tasks (MRPC, STS-B, and SST-2) were selected from the General 
Language Understanding Evaluation (GLUE) benchmark, which has the most widely used datasets for evaluating language-oriented task performances.", "MRPC Microsoft Research Paraphrase Corpus consists of 4.1k train and 1.7k test sentence pairs automatically extracted from online news sources, with human annotations for whether the sentences in the pair are semantically equivalent (Dolan and Brockett, 2005).", "STS-B The Semantic Textual Similarity Benchmark is a collection of 5.7k train and 1.4k test sentence pairs drawn from news headlines and other sources (Cer et al., 2017).", "They were annotated with a score from 1 to 5 denoting how similar the Tasks Encoder BERTBASEBERTLARGE last best top-12 last best top-16 layer layer heads layer layer heads MRPC (F1) 88.0 88.2 88.9 89.3 88.6 91.4 MRPC (Acc) 82.4 83.1 84.6 84.6 84.1 87.7 STS-B (P)* 88.2 74.6 88.6 89.5 54.8 89.4 STS-B (S) 87.9 73.5 88.3 89.1 53.6 88.7 SST-2 (Acc) 92.9 92.4 93.1 94.0 92.9 94.5 Table 4: A summary of three downstream tasks on dev set for the ordinary fine-tuning method using the last layer, best layer, and the proposed method of using top-n attention heads.", "SST-2 The Stanford Sentiment Treebank is a binary single-sentence classification task consisting of sentences extracted from movie reviews with human annotations of their sentiment (Socher et al., 2013) with 67k train and 1.8k test samples.", "It was designed to predict the sentiment score for a given sentence in binary scales.", "First, we evaluated each of the attention heads on the three downstream tasks, following the procedure in Section 4.2.", "Using the head-wise evaluation results, we again reconstructed sentence embeddings from the output vectors of superior attention heads and use them as input representations for the downstream task classifier.", "Since pre-trained transformer encoders are usually fine-tuned when applied to the downstream tasks, we unfroze the parameters of the pre-trained encoder and fine-tuned both the classifier and the encoder, end to end.", "Also, we conducted regular fine-tuning experiments by adding a classifier on the top of the last hidden vectors for each pre-trained encoder.", "We use a batch size of 32 with a learning rate of 2e-5 and fine-tune for 3 epochs over the data for all the three downstream tasks, following the fine-tuning procedure in (Devlin et al., 2019).", "Each experiment is repeated five times with different random seeds to provide fair comparisons against the performance variance of fine-tuning on small datasets.", "The results are presented in Table 4.", "Both BERTBASE and BERTLARGE obtained additional performance gains, 0.82% and 1.04% points for the base and large models, respectively, over the model with the ordinary last-layer fine-tuning.", "We find that BERTLARGE receives an additional performance gain on the MRPC task by 2.1% and 3.1% point improvements on F1 and accuracy, respectively.", "Fine-tuning with attention heads only gives a slightly negative result on STS-B with BERTLARGE .", "Fine-tuning with the best-layer did not provide consistent performance increment.", "It is noteworthy that the performance of an already pre-trained encoder model could be further improved by simply pulling the output vectors from the influential attention heads.", "In order to investigate the impact of the fine-tuning process toward the internal attention heads, we also conducted the attention head-wise evaluation on each encoder after three epochs of the fine-tuning process.", "Our question was whether the influential 
attention heads at the initial pre-trained state would remain superior after fine-tuning, or whether the spot of influential heads would shift toward the last layer.", "The results are presented in Figure 4.", "First, we again observe that the regions of the influential heads vary among the downstream tasks.", "In the MRPC task, influential heads are distributed across all layers and heads, but the ones for the SST-2 task are highly concentrated toward the uppermost layer.", "Notably, the heat maps of the STS-B task are unusual in that there are two influential regions, in the lower (first 25-30% of) layers and in the upper layers.", "We can also observe that the overall heat map patterns are stretched as the model capacity increases, as reported in Tenney et al. (2019a).", "From the way feature vectors are pulled from the encoder, we observe that fine-tuning with the reconstructed sentence embeddings obtained from the top-n attention heads results in smoother heat map amplification, especially with the BERTLARGE model.", "The most interesting result is that the intensity (performance) of the initial heat maps is amplified after the fine-tuning process while the overall distribution patterns are preserved.", "Another phenomenon is that the attention heads adjacent to the superior ones also show a slight performance increase.", "These results imply that the fine-tuning process leverages the initially superior attention heads regardless of their locations inside the model, rather than training arbitrary attention heads.", "This behavior might explain the additional performance increment on the downstream tasks.", "Figure 4: Heat maps of attention head-wise evaluation on downstream tasks (initial pre-trained, fine-tuned with the last layer, and fine-tuned with top-n heads, for BERTBASE and BERTLARGE on MRPC, STS-B, and SST-2).", "We conjecture that our reconstruction method could act as a partial residual connection, as in DenseNet (Huang et al., 2017), during the fine-tuning process by feeding the reconstructed embedding to the input of the classifier, which creates a direct gradient flow from the final objective loss of the downstream tasks toward the internal superior attention heads.", "We believe that further work varying the number of concatenated attention heads (especially n > H) would provide additional performance gains.", "Our analysis so far has concentrated on the distribution of the influential attention heads on different layers for a given task as a way of differentiating their roles for individual tasks.", "A pattern we observed was that different numbers of heads are influential and that upper, lower, or all layers tend to be influential, depending on the linguistic tasks.", "Our next question is whether individual heads on different layers are responsible for processing syntactic or semantic properties exclusively or in a coordinated fashion.", "In order to observe the performance of attention head h_{i,j} for syntactic and semantic tasks, we define a score for handling syntactic capabilities as an average of test accuracy scores, s(h_{i,j}), from the [Depth, TopConstituents, BigramShift] group and that for semantic capabilities from the [Tense, SubjNumber, ObjNumber, OddManOut, CoordinationInversion] group.", "We omit the accuracy results from the surface information group since it is difficult to label them as syntactic or semantic.", "Figure 5 shows the syntactic-semantic score distributions of the attention heads for different pre-trained transformer models.", "Each attention head seems to handle both syntactic and semantic information in a balanced way.", "This is interesting because different attention heads or layers are often more influential for many linguistic tasks.", "When averaged over the tasks of either the syntactic or the semantic group, however, it appears that processing syntactic and semantic information is shared by individual heads and layers.", "There is a tendency that the lower the layer, the less influential it is for both syntactic and semantic processing.", "However, this tendency is not observed in the large models.", "For BERTLARGE, the highest layers (purple colors) contribute less for both syntactic and semantic properties.", "For ELECTRALARGE, the purple heads contribute the least.", "It re-confirms our hypothesis that using the last layer representation is not always the best.", "Figure 5: A distribution of syntactic and semantic scores of the attention heads.", "The linear relationship between syntactic and semantic processing capabilities across the heads is considered a new finding.", "Although different layers and heads tend to play stronger or weaker roles for different linguistic properties, as shown in the heat maps, they contribute to both syntactic and semantic processing in a well-balanced way.", "While recent research has demonstrated the capability of transformer-based encoders for generating rich sentence representations, the roles of individual self-attention heads were hardly known.", "Furthermore, little is known about whether and how we can utilize them for better capturing linguistic properties and eventually improving the performance of the downstream tasks for which the embeddings are constructed.", "One of the major contributions of this paper is to fill this void by inspecting where and how the attention heads are trained internally for classification tasks corresponding to different linguistic properties and for the downstream tasks.", "The analysis results clearly show, through the comprehensive heat maps, a tendency that syntactic and semantic information is mainly handled from the lower layers to the upper layers.", "We also showed that understanding the roles of attention heads in handling task-specific information can help develop adaptive sentence representations, by selecting influential attention heads and testing them on the three downstream tasks.", "The additional performance gains obtained by the simple method show that this approach of using the anatomy of the transformer models and the attention heads is promising for utilizing expensive pre-trained transformer models to their maximal extent.", "Furthermore, we explored how the hundreds of attention heads underwent performance variation during the fine-tuning process on the downstream tasks, revealing the internal behaviors with the proposed analysis method.", "The analysis of syntactic-semantic score distributions revealed that individual attention heads capture both syntactic and semantic information.", "It also showed that the amount of both syntactic and semantic information handled by the heads varies from layer to layer, sometimes showing that the last 
layer contributes much less especially with large models.", "While the empirical results are strong, additional work remains to further our understanding of the internal workings of the transformer architecture and its role in building such strong language models for a variety of tasks.", "Immediate attention should be paid to the investigation of how heat maps would vary during the extensive pre-training so that we have a better understanding of the dynamics of the learning processes.", "This work was supported by Institute for Information & communications Technology Planning & Evaluation(IITP) grant funded by the Korea govern-ment(MSIT) (No. 2013-0-00131, Development of Knowledge Evolutionary WiseQA Platform Technology for Human Knowledge Augmented Ser-vices).", "We are grateful for the support of the GPU server to IdeaFactory, Startup KAIST.", "We also appreciate Kyubyong Park, Seongok Ryu, and YJ for reviewing the earlier version of this paper." ]
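The head-wise evaluation and top-n reconstruction described above reduce to two small routines once per-head sentence vectors have been extracted. The sketch below assumes scikit-learn and NumPy, and that each attention head's 64-dimensional output has already been pulled at the sentence-representative position; the function names and the random toy data are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def probe_heads(train_feats, y_train, dev_feats, y_dev):
    """train_feats/dev_feats: dict {(layer, head): (num_examples, d_head)} per-head vectors."""
    scores = {}
    for key, X_tr in train_feats.items():
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_train)
        scores[key] = clf.score(dev_feats[key], y_dev)  # validation accuracy s(h_ij)
    return scores


def top_n_representation(feats, scores, n):
    """Concatenate the outputs of the n best-scoring heads into one sentence embedding."""
    best = sorted(scores, key=scores.get, reverse=True)[:n]
    return np.concatenate([feats[k] for k in best], axis=-1)  # (num_examples, n * d_head)


# Toy usage with random features standing in for real head outputs (12 layers x 12 heads).
rng = np.random.default_rng(0)
train_feats = {(l, h): rng.normal(size=(200, 64)) for l in range(12) for h in range(12)}
dev_feats = {(l, h): rng.normal(size=(50, 64)) for l in range(12) for h in range(12)}
y_train, y_dev = rng.integers(0, 2, 200), rng.integers(0, 2, 50)
scores = probe_heads(train_feats, y_train, dev_feats, y_dev)
X_new = top_n_representation(train_feats, scores, n=12)  # same width as one hidden layer
```

Setting n equal to the number of heads per layer keeps the reconstructed embedding the same size as a single hidden state, which is the comparison made in the experiments above.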
[ "abstain", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "result", "objective", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "result", "abstain", "other", "other", "other" ]
[ "Yoshimasa Tsuruoka The University of Tokyo", "[email protected] Abstract A major obstacle in reinforcement learning-based sentence generation is the large action space whose size is equal to the vocabulary size of the target-side language.", "To improve the efficiency of reinforcement learning, we present a novel approach for reducing the action space based on dynamic vocabulary prediction.", "Our method first predicts a fixed-size small vocabulary for each input to generate its target sentence.", "The input-specific vocabularies are then used at supervised and reinforcement learning steps, and also at test time.", "In our experiments on six machine translation and two image captioning datasets, our method achieves faster reinforcement learning ( 2.7x faster) with less GPU memory ( 2.3x less) than the full-vocabulary counterpart.", "We also show that our method more effectively receives rewards with fewer iterations of supervised pre-training.", "Sentence generation with neural networks plays a key role in many language processing tasks, including machine translation (Sutskever et al., 2014), image captioning (Lin et al., 2014), and abstractive summarization (Rush et al., 2015).", "The most common approach for learning the sentence generation models is maximizing the likelihood of the model on the gold-standard target sentences.", "Recently, approaches based on reinforcement learning have attracted increasing attention to reduce the gap between training and test situations and to directly incorporate task-specific and more flexible evaluation metrics such as BLEU scores (Papineni et al., 2002) into optimization (Ranzato et al., 2016).", "tionally demanding to be used with large training data.", "In reinforcement learning for sentence generation, selecting an action corresponds to selecting a word in the vocabulary V .", "The number of possible actions at each time step is thus equal to the vocabulary size, which often exceeds tens of thousands.", "Among such a large set of possible actions, at most N actions are selected if the length of the generated sentence is N , where we can assume N (cid:28) | V | .", "In other words, most of the possible actions are not selected, and the large action space slows down reinforcement learning and consumes a large amount of GPU memory.", "In this paper, we propose to accelerate reinforcement learning by reducing the large action space.", "The reduction of action space is achieved by predicting a small vocabulary for each source input.", "Our method first constructs the small input-specific vocabulary by selecting K ( 1000) relevant words, and then the small vocabulary is used at both training and test time.", "Our experiments on six machine translation and two image captioning datasets show that our method enables faster reinforcement learning with less GPU memory than the standard full softmax method, without degrading the accuracy of the sentence generation tasks.", "Our method also works faster at test time, especially on CPUs.", "The implementation of our method is available at https: //github.com/hassyGo/NLG-RL .", "We first describe a neural machine translation model and an image captioning model as examples of sentence generation models.", "Machine translation is a text-to-text task, and image captioning is an image-to-text task.", "We then review how reinforcement learning is used, and present a simple and efficient method to accelerate the training.", "Recurrent Neural Networks (RNNs) are widely used to generate sentences by outputting words 
one by one (Sutskever et al., 2014).", "To generate a sentence Y = ( y 1 , y 2 , . . . , y N ) , where N is its length, given a source input X , a hidden state h t R d is computed for each time step t ( 1) by using its previous information: h t = RNN ( h t 1 , e ( y t 1 ) , s t 1 ) , (1) where RNN( ) is an RNN function, e ( y t 1 ) R d is a word embedding of y t 1 , and s t 1 R d is a hidden state optionally used to explicitly incorporate the information about the source input X into the transition.", "We employ Long Short-Term Memory (LSTM) units (Hochreiter and Schmid-huber, 1997) for the RNN function.", "The task here is to predict the t -th word y t by computing a target word distribution p ( y | y <t , X ) R | V | , where | V | represents the vocabulary size of the target language.", "p ( y | y <t , X ) is used to generate a sentence by either greedy/beam search or random sampling.", "To learn the model parameters, the following cross entropy loss is usually employed: L c ( Y g , X ) = N g (cid:88) t =1 log p ( y = y t | y <t , X ) , (2) where we assume that the target sentence Y g is the gold sequence.", "Once we train the model, we can use it to generate unseen sentences.", "Machine translation In the context of machine translation, the source input X corresponds to a source sentence ( x 1 , x 2 , . . . , x M ) of length M .", "Each word x i is also associated with a word embedding e ( x i ) R d .", "We assume that a hidden state h i R 2 d is computed for each x i by using a bi-directional RNN with LSTM units (Graves and Schmidhuber, 2005).", "That is, h i is the concatenation of x i 's d -dimensional hidden states [ h i ; h i ] computed by a pair of forward and backward RNNs.", "We set the initial hidden state of the sentence generator as h 0 = h M + h 1 .", "Following an attention mechanism proposed in Luong et al. (2015), s t for predicting y t is computed as follows: s t = tanh (cid:32) W s (cid:34) h t ; M (cid:88) i =1 a i h i (cid:35) + b s (cid:33) , (3) where a i = f ( h t , i, h ) is the global-attention function in Luong et al. (2015), W s R d 3 d is a weight matrix, and b s R d is a bias vector.", "s t is then used to compute the target word distribution: p ( y | y <t , X ) = softmax( W p s t + b p ) , (4) where W p R | V | d is a weight matrix, and b p R | V | is a bias vector.", "Image captioning In the case of image captioning, the source input X corresponds to an image to be described.", "We assume that in our preprocessing step, each input image is fed into a convolutional neural network to extract its fixed-length feature vector f R d f .", "More specifically, we use the pre-computed feature vectors provided by Kiros et al. 
(2014), and the feature vectors are never updated in any model training processes.", "The input feature vector is transformed into the initial hidden state h 0 = tanh ( W f f + b f ) , where W f R d d f is a weight matrix, and b f R d is a bias vector.", "In contrast to machine translation, we do not use s t 1 in Equation (1); more concretely, we do not use any attention mechanisms for image captioning.", "Therefore, we directly use the hidden state h t to compute the target word distribution: p ( y | y <t , X ) = softmax( W p h t + b p ) , (5) where the weight and bias parameters are analogous to the ones in Equation (4).", "For both of the tasks, we use the weight-tying technique (Inan et al., 2017; Press and Wolf, 2017) by using W p as the word embedding matrix.", "That is, e ( y t ) is the y t -th row vector in W p , and the technique has shown to be effective in machine translation (Hashimoto and Tsuruoka, 2017) and text summarization (Paulus et al., 2018).", "One well-known limitation of using the cross entropy loss in Equation (2) is that the sentence generation models work differently at the training and test time.", "More concretely, the models only observe gold sequences at the training time, whereas the models have to handle unseen sequences to generate sentences at the test time.", "To bridge the gap, reinforcement learning has started gaining much attention (Ranzato et al., 2016; Wu et al., 2016; Rennie et al., 2017; Zhang and Lapata, 2017; Paulus et al., 2018; Yang et al., 2018).", "In this work, we focus on the most popular method called REINFORCE (Williams, 1992).", "1 In REINFORCE, the sentence generation model sets an initial state given a source input, and then iterates an action selection and its corresponding state transition.", "The action selection corresponds to randomly sampling a target word from Equation (4) and (5), and the state transition corresponds to the RNN transition in Equation (1).", "Once a sentence is generated, an approximated loss function is defined as follows: L r ( Y, X ) = N (cid:88) t =1 R t log p ( y = y t | y <t , X ) , (6) where R t is the reward at time step t , and the loss is approximated by the single example Y .", "R t is used to evaluate how good the t -th action selection is.", "Unlike maximum likelihood training, the reward function can be defined by using task-specific evaluation scores like BLEU for machine translation.", "In this paper, we employ GLEU proposed by Wu et al. (2016), a variant of sentence-level BLEU.", "Following the implementation in Ranzato et al. 
(2016), we define R t = GLEU( Y, Y g ) b t , where b t is a baseline value estimating the future reward from the next time step to reduce the variance of the gradients.", "To estimate b t , we jointly train a linear regression model by minimizing (cid:107) b t GLEU( Y, Y g ) (cid:107) 2 , and b t is computed as b t = ( W r s t + b r ) , where W r R d is a weight vector, b r is a bias, ( ) is the logistic sigmoid function, and in the case of image captioning, h t is used instead of s t .", "Overall model training The reinforcement learning step is usually applied after pre-training the models with the cross entropy loss in Equation (2).", "At the REINFORCE phase, we define the following joint loss function: L = L c + (1 ) L r , (7) where is a hyperparameter, and = 0 .", "0 usually leads to unstable training (Wu et al., 2016).", "The vocabulary size | V | is usually more than ten thousands for datasets covering many sentences with a variety of topics.", "However, for example, at most 100 unique words are selected when generating a sentence of length 100.", "That is, the output length N is much smaller than the vocabulary 1 We tried self critic (Rennie et al., 2017), but did not observe significant improvement over REINFORCE.", "size | V | , and this fact motivated us to reduce the large action space.", "Moreover, we have in practice found that REINFORCE runs several times slower than the supervised learning with the cross entropy loss.", "To accelerate the training, we propose to construct a small action space for each source input.", "In other words, our method selects a small vocabulary V (cid:48) of size K for each source input in advance to the model training.", "In this section, we assume that V (cid:48) is given and represented with a sparse binary matrix MX RK | V | , where there are only K non-zero elements at position ( i, w i ) for 1 i K .", "w i is a unique word index in V .", "MX is used to construct a small subset of the parameters in the softmax layer: W (cid:48) p = MXW p , b (cid:48) p = MX b p , (8) and W (cid:48) p RK d and b (cid:48) p RK are used instead of W p and b p in Equation (4) and (5).", "Therefore, in mini-batched processes with a mini-batch size B , our method constructs B different sets of ( W (cid:48) p , b (cid:48) p ) .", "Relationship to previous work Sampling-based approximation methods have previously been studied to reduce the computational cost at the large softmax layer in probabilistic language modeling (Ji et al., 2016; Zoph et al., 2016), and such methods are also used to enable one to train neural machine translation models on CPUs (Eriguchi et al., 2016).", "The construction of ( W (cid:48) p , b (cid:48) p ) in our method is similar to these softmax approximation methods in that they also sample small vocabularies either at the word level (Ji et al., 2016), sentence level (Hashimoto and Tsuruoka, 2017), or mini-batch level (Zoph et al., 2016).", "However, one significant difference is that the approximation methods work only at training time using the cross entropy loss, and full softmax computations are still required at test time.", "The difference is crucial because a sentence generation model needs to simulate its test-time behavior in reinforcement learning.", "The remaining question is how to construct the input-specific vocabulary V (cid:48) for each source input X .", "This section describes our method to construct V (cid:48) by using a vocabulary prediction model which is separated from the sentence generation models.", "In the 
vocabulary prediction task, the input is the source X (source sentences or images) to be described, and the output is V (cid:48) .", "We should be careful not to make the prediction model computationally expensive; otherwise the computational efficiency by our method would be canceled out.", "To feed the information about X into our vocabulary prediction model, we define an input vector v ( X ) R d v .", "For image captioning, we use the feature vector f described in Section 2.1: v ( X ) = W v f + b v , where W v R d v d f is a weight matrix, and b v R d v is a bias vector.", "For machine translation, we employ a bag-of-embeddings representation: v ( X ) = 1 M (cid:80) Mi =1 e v ( x i ) , where the d v -dimensional word embedding e v ( x i ) R d v is different from e ( x i ) used in the machine translation model.", "By using the different set of the model parameters, we avoid the situation that our vocabulary prediction model is affected during training the sentence generation models.", "Relationship to previous work Vocabulary prediction has gained attention for training sequence-to-sequence models with the cross entropy loss (Weng et al., 2017; Wu et al., 2017), but not for reinforcement learning.", "Compared to our method, previous methods jointly train a vocabulary predictor by directly using source encoders as input to the predictor.", "One may expect joint learning to improve both of the vocabulary predictor and the sentence generator, but in practice such positive effects are not clearly observed.", "Weng et al. (2017) reported that the joint learning improves the accuracy of their machine translation models, but our preliminary experiments did not indicate such accuracy gain.", "Such a joint training approach requires the model to continuously update the vocabulary predictor during REINFORCE, because the encoder is shared.", "That is, the action space for each input changes during reinforcement learning, and we observed unstable training.", "Therefore, this work separately models the vocabulary predictor and focuses on the effects of using the small vocabularies for REINFORCE.", "Another note is that Jean et al. (2015) and L'Hostis et al. (2016) also proposed to construct small vocabularies in advance to the cross entropy-based training.", "They suggest that the use of word alignment works well, but using the word alignment is not general enough, considering that there exist different types of source input.", "By contrast, our method can be straightforwardly applied to the two sentence generation tasks with the different input modalities (i.e. 
image and text).", "Once the input representation v ( X ) is computed, we further transform it by a single residual block (He et al., 2016): r ( X ) = Res ( v ( X )) R d v .", "2 Then r ( X ) is fed into a prediction layer: o = ( W o r ( X ) + b o ) , (9) where W o R | V | d v is a weight matrix, and b o R | V | is a bias vector.", "The i -th element o i corresponds to the probability that the i -th word in the target vocabulary V appears in the target sentence Y given its source X .", "We use the training data for the sentence generations tasks to train the vocabulary predictor.", "For each X in the training data, we have its gold target sentence Y g .", "We train the vocabulary predictor as a multi-label classification model by the following loss function: | V | (cid:88) i =1 ( t i log o i + (1 t i ) log(1 o i )) , (10) where t i is equal to 1 .", "0 if the i -th word in V is included in Y g , and otherwise t i is 0 .", "0 .", "In practice, we apply the label smoothing technique (Szegedy et al., 2016) to the loss function.", "We evaluate the accuracy of the vocabulary predictor by using a separate development split D : # of correctly predicted words in D # of words in D , (11) where we select the topK predictions in Equation (9) for each source input X in D , and the evaluation metric is a recall score.", "We use the topK words to construct the input-specific vocabularies V (cid:48) for the sentence generation models, and we restrict that the recall is 100% for the training data.", "We describe our experimental settings, and the details can be found in the supplemental material.", "2 We can use arbitrary types of hidden layers or even linear models like SVMs, but we found this one performed the best.", "We describe the details of this in the supplemental material.", "We used machine translation datasets of four different language pairs: English-to-German (En-De), English-to-Japanese (En-Ja), English-to-Vietnamese (En-Vi), and Chinese-to-Japanese (Ch-Ja).", "For image captioning, we used two datasets: MS COCO (Lin et al., 2014) and Flickr8K.", "Table 1 summarizes the statistics of the training datasets, where the number of training examples (Size), the target vocabulary size ( | V | ), and the maximum length of the target sentences ( max( N ) ) are shown.", "For the machine translation datasets, we manually set max( N ) and omitted training examples which violate the constraints.", "En-De: We used 100,000 training sentence pairs from news commentary and newstest2015 as our development set, following Eriguchi et al. (2017).", "En-Ja: We used parallel sentences in ASPEC (Nakazawa et al., 2016) and constructed three types of datasets: En-Ja (100K), En-Ja (2M), and En-Ja (2M, SW).", "The 100K and 2M datasets were constructed with the first 100,000 and 2,000,000 sentence pairs, respectively.", "To test our method using subword units, we further preprocessed the 2M dataset by using the Sentence-Piece toolkit (Kudo and Richardson, 2018) to construct the En-Ja (2M, SW) dataset.", "En-Vi: We used the pre-processed datasets provided by Luong and Manning (2015).", "Our development dataset is the tst2012 dataset.", "Ch-Ja: We constructed the Ch-Ja dataset by using the first 100,000 sentences from ASPEC .", "MS COCO and Flickr8K: We used the preprocessed datasets provided by Kiros et al. 
(2014).", "We can also download the 4096-dimensional feature vectors f (i.e., d f = 4096 ).", "vocabulary predictor with a learning rate of 0 .", "08 and a mini-batch size of 128.", "The model for each setting was tuned based on recall scores (with K = 1000 ) for the development split.", "We set d = 256 with single-layer LSTMs for all the experiments, except for the En-Ja (2M) and (2M, SW) datasets.", "For the larger En-Ja datasets, we set d = 512 with two-layer LSTMs.", "We used stochastic gradient decent with momentum, with a learning rate of 1 .", "0 , a momentum rate of 0 .", "75 , and a mini-batch size of 128.", "The model for each setting was tuned based on BLEU scores for the development split.", "All of the models achieved the best BLEU scores for all the datasets within 15 to 20 training epochs.", "Each of the selected models with the best BLEU scores was used for the following REINFORCE step.", "For REINFORCE, we set = 0 .", "005 , and the learning rate was set to 0 .", "01 .", "The REINFORCE steps required around 5 epochs to significantly improve the BLEU scores.", "We used a single GPU of NVIDIA GeForce GTX 1080 3 to run experiments for the En-De, En-Ja (100K), En-Vi, Ch-Ja, MS COCO, and Flickr8K datasets.", "For the En-Ja (2M) and En-Ja (2M, SW) datasets, we used a single GPU of NVIDIA Tesla V100 4 to speedup our experiments.", "Mini-batch splitting It should be noted that our small softmax method can be run even on the single GTX 1080 GPU for the larger translation datasets, whereas the full softmax method runs out of the GPU memory.", "A typical strategy to address such out-of-memory issues is to use multiple GPUs, but we have found that we need at most eight GPUs to conduct our experiments on the full softmax method with REINFORCE.", "5 Moreover, using the multiple GPUs does not always speedup the training time.", "We instead employ another strategy to split the mini-batch at each training iteration.", "First, we sort the mini-batch examples according to the lengths of the source (or target) text, and then split the mini-batch into S sets of the training examples.", "For example, in our case the 3 The GPU memory capacity is 11,178MiB.", "mini-batch size is 128, and if S is set to 4, each of the smaller sets includes 32 training examples.", "We perform back-propagation for each set one by one, and at each step we delete the corresponding computational graphs to reduce the GPU memory consumption.", "Finally, the accumulated partial derivatives are used to update the model parameters.", "More details can be found in our Pytorch 0.4 implementation.", "Figure 1 shows recall scores with respect to different values of the small vocabulary size K for each dataset.", "We can see that the recall scores reach 95% with K = 1000 for most of the datasets.", "One exception is the En-De dataset, and this is not surprising because a German vocabulary would become sparse by many compound nouns.", "These results show that our vocabulary predictor works well for source inputs of different modalities (text and image) and their corresponding different target languages.", "Our method also works at the subword level as well as at the standard word level.", "For training the sentence generation models, we set K = 500 for the Flickr8K dataset and K = 1000 for the other datasets.", "The goal of this paper is achieving efficient reinforcement learning for sentence generation to encourage future research, but before evaluating the efficiency of our method, we show that using the small vocabularies does 
not degrade the accuracy of the sentence generation models.", "Table 2 shows BLEU scores for the development splits of the four machine translation and two image captioning datasets.", "The BLEU scores are averaged over five different runs with different random seeds, and the standard deviations are also reported.", "We can see in Table 2 that our method (Small softmax) keeps the BLEU scores as high as those of Full softmax.", "For some datasets, the BLEU scores of our method are even better than those of the full softmax method.", "The trend is consistent in both of the cross entropy training phase and the REINFORCE phase.", "These results indicate that our method works well for different machine translation and image captioning datasets.", "We also confirmed that our experimental results are competitive with previously reported results when using the same training datasets; for example, our En-Vi test set result on tst2013 is 27.87 0.21 (cf.", "26.9 in Luong and Manning (2015)).", "Better generation of rare words These BLEU scores suggest that our method for reinforcement learning has the potential to outperform the full softmax baseline.", "However, it is still unclear what is the potential advantage in terms of generation quality.", "We therefore analyzed the differences between output sentences of the small and full softmax methods, following Ott et al. (2018).", "Figure 2 shows the results of the En-De translation dataset, 0 1 2 3 4 5 6 10 20 30 40 50 60 70 80 90 100 O b s e r v ed f r equen cy [ % ] Frequency percentile in the training data Small softmax (K=1000) Full softmax Reference Figure 2: An analysis on the En-De translation results.", "and we observed the same trend for all the other datasets.", "Each entry is computed as follows: # of output words in each percentile # of output words , (12) where the 10 percentile includes the top 10% of the most frequent words, and the 100 percentile includes the top 10% of the most infrequent words.", "We can see that our small softmax method better outputs rare words, and these results suggest that using input-specific vocabularies is useful in controlling action spaces for reinforcement learning.", "Effectiveness with fewer pre-training steps We followed the standard practice that the models are pre-trained by maximum likelihood before starting reinforcement learning.", "However, such pre-training may have a negative effect in reinforcement learning.", "Consider the situation where the pre-training leads to zero cross-entropy loss.", "In this case, nothing will be learned during reinforcement learning because no exploratory action can be performed.", "Although pre-training in practice does not lead to zero cross-entropy loss, it can still overfit the data and result in very sharp out-2M 2M, SW Cross entropy 38.76 39.15 w/ beam search 39.88 40.35 REINFORCE w/ cross entropy 40.10 40.26 w/ beam search 40.36 40.38 w/ beam search ( K = 500) 40.07 40.07 w/ beam search ( K =2000) 40.30 40.50 w/ beam search ( K =3000) 40.27 40.41 Table 3: BLEU scores for the development split of the En-Ja (2M) and En-Ja (2M, SW) datasets.", "REINFORCE w/ cross entropy ( K =1000) 40.16 w/ beam search 40.50 Cross entropy (1.3M) w/ beam search 39.42 (Hashimoto and Tsuruoka, 2017) Cross entropy (2M) w/ beam search 40.29 (Oda et al., 2017b) Cross entropy (2M+1M back-trans.) 
41.42 w/ beam search (Morishita et al., 2017) Table 4: BLEU scores for the En-Ja test split, where we use the En-Ja (2M, SW) dataset.", "put distributions, thereby hindering exploration in reinforcement learning.", "It is therefore important to consider a reinforcement learning setting with less or no pre-training (Liu et al., 2018).", "In Figure 3 for the En-Ja (100K) dataset, we show that the small softmax method works more effectively with fewer pre-training epochs.", "For this experiment, we set = 0 in Equation (7) to purely focus on REINFORCE.", "Using GLEU (or BLEU) scores gives sparse rewards, and thus the resulting BLEU scores are very low with fewer pre-training steps, but the small softmax method has the potential to work well if we can design more effective reward functions.", "Results on larger datasets To see whether our method works in larger scales, Table 3 shows BLEU scores for the development split when using the En-Ja (2M) and En-Ja (2M, SW) datasets.", "6 These results show that our method consistently works even on these larger datasets at the word and subword levels.", "In this table we also report how our method works with beam search, and the greedy-based BLEU scores are very close to those of beam search after the REINFORCE phase.", "When performing a beam search, we can optionally use different sizes of the small vocab-6 For the 2M dataset, the full softmax baseline achieves BLEU scores of 38.67 and 39.84 for the Cross entropy and REINFORCE w/ cross entropy settings, respectively.", "ulary, but we observe that our method is robust to the changes, whereas Wu et al. (2017) reported that their dynamic vocabulary selection method is sensitive to such changes.", "For reference, we report the test set results in Table 4.", "We cite BLEU scores from previously published papers which reported results of single models (i.e., without ensemble).", "Our method with greedy translation achieves a competitive score.", "It should be noted that Morishita et al. 
(2017) achieve a better score presumably because they used additional in-domain one million parallel sentences obtained by the back-translation technique (Sennrich et al., 2016).", "This section discusses our main contribution: how efficient our method is in accelerating reinforcement learning for sentence generation.", "We have examined the training-time efficiency of our method.", "Table 5 shows the training time [min-utes/epoch] for five different datasets.", "We selected the five datasets to show results with different vocabulary sizes and different maximum sentence lengths, and we observed the same trend on the other datasets.", "The vocabulary size | V | and the maximum sentence length max( N ) are shown for each training dataset.", "In the training with the standard cross entropy loss, the speedup by our method is not impressive as long as the vocabulary size | V | can be easily handled by the GPUs.", "We set S = 2 for the cross entropy training of the Full softmax method in the En-Ja (2M) setting, to reduce the GPU memory consumption as described in Section 4.4.", "In particular, in the En-Ja (2M) experiments, our method gains a factor of 2.7 speedup compared with the full softmax baseline ( S = 3 ).", "For most of the experimental settings, the speedup significantly accelerates our research and development cycles when working on reinforcement learning for sentence generation tasks.", "One exception is the Flickr8K dataset whose original vocabulary size | V | is already very small, and the lengths of the target sentences are short.", "In the supplementary material, we also show the test-time efficiency.", "Our method is also efficient in terms of GPU memory consumption at training time.", "Table 5 also shows the maximum GPU memory consumption during the training.", "These results show that our method easily fits in the memory of the single GTX 1080 GPU, whereas Full softmax is very sensitive to the vocabulary size | V | and the sentence lengths.", "In particular, we observe about 56% reduction in memory usage when using the En-Ja (2M) dataset.", "By saving the memory usage, one could try using larger models, larger mini-batches, larger vocabularies, and longer target sentences without relying on multiple GPUs.", "Scalability of our method To further show the memory efficiency our our method, we measured the GPU memory consumption with a larger mini-batch size, 2048.", "We applied the mini-batch splitting strategy to both the small and full softmax methods to handle such a large mini-batch size.", "In the En-Ja (2M) experiments with REINFORCE, our small softmax method works with the large batch-size by setting S = 6 , whereas the full softmax baseline needs S = 40 .", "Aggressively splitting the mini-batch (i.e. using larger values of S ) slows down the training time, and in that sense our method is much more efficient when we consider the larger mini-batch sizes.", "If we increase the mini-batch size to 4096, our small softmax method works with S = 12 .", "Reducing the computational cost at the large softmax layer in language modeling/generation is actively studied (Jean et al., 2015; Ji et al., 2016; Eriguchi et al., 2016; L'Hostis et al., 2016; Zoph et al., 2016; Wu et al., 2017).", "Most of the existing methods try to reduce the vocabulary size by either negative sampling or vocabulary prediction.", "One exception is that Oda et al. 
(2017a) propose to predict a binary code of its corresponding target word.", "Although such a sophisticated method is promising, we focused on the vocabulary reduction method to apply policy-based reinforcement learning in a straightforward way.", "As reported in this paper, one simple way to define a reward function for reinforcement learning is to use task-specific automatic evaluation metrics (Ranzato et al., 2016; Wu et al., 2016; Rennie et al., 2017; Zhang and Lapata, 2017; Paulus et al., 2018), but this is limited in that we can only use training data with gold target sentences.", "An alternative approach is to use a discriminator in generative adversarial networks (Goodfellow et al., 2014), and Yang et al. (2018) showed that REINFORCE with such a discriminator improves translation accuracy.", "However, Yang et al. (2018) only used the training data, and thus the potential of the generative adversarial networks is not fully realized.", "One promising direction is to improve the use of the generative adversarial networks for the sentence generation tasks by using our method, because our method can also accelerate the combination of REINFORCE and the discriminator.", "This paper has presented how to accelerate reinforcement learning for sentence generation tasks by reducing large action spaces.", "Our method is as accurate as, is faster than, and uses less GPU memory than the standard full softmax counterpart, on sentence generation tasks of different modalities.", "In future work, it is interesting to use our method in generative adversarial networks to further improve the sentence generation models.", "We thank anonymous reviewers for their fruitful comments.", "This work was supported by JST CREST Grant Number JPMJCR1513, Japan." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "objective", "result", "abstain", "other", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "objective", "result", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "result", "abstain", "result", "result", "result", "abstain", "result", "result", "result", "result", "result", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "other", "other", "other", "method", "method", "other", "other", "abstain", "method", "method", "result", "other", "other" ]
[ "We present two neural models for event factuality prediction, which yield significant performance gains over previous models on three event factuality datasets: FactBank, UW, and MEANTIME.", "We also present a substantial expansion of the It Happened portion of the Universal Decompositional Semantics dataset, yielding the largest event factuality dataset to date.", "We report model results on this extended factuality dataset as well.", "A central function of natural language is to convey information about the properties of events.", "Perhaps the most fundamental of these properties is factuality : whether an event happened or not.", "A natural language understanding system's ability to accurately predict event factuality is important for supporting downstream inferences that are based on those events.", "For instance, if we aim to construct a knowledge base of events and their participants, it is crucial that we know which events to include and which ones not to.", "The event factuality prediction task (EFP) involves labeling event-denoting phrases (or their heads) with the (non)factuality of the events denoted by those phrases (Saur and Pustejovsky, 2009, 2012; de Marneffe et al., 2012).", "Figure 1 exemplifies such an annotation for the phrase headed by leave in (1), which denotes a factual event ( =factual, (cid:9) =nonfactual).", "In this paper, we present two neural models of event factuality (and several variants thereof).", "We show that these models significantly outperform previous systems on four existing event factuality datasets FactBank (Saur and Pustejovsky, 2009), the UW dataset (Lee et al., 2015), MEANTIME (Minard et al., 2016), and Universal De-failed Jo leave to trace no .", "compositional Semantics It Happened v1 (UDS-IH1; White et al., 2016) and we demonstrate the efficacy of multi-task training and ensembling in this setting.", "In addition, we collect and release an extension of the UDS-IH1 dataset, which we refer to as UDS-IH2, to cover the entirety of the English Universal Dependencies v1.2 (EUD1.2) treebank (Nivre et al., 2015), thereby yielding the largest event factuality dataset to date.", "1 We begin with theoretical motivation for the models we propose as well as discussion of prior EFP datasets and systems ( 2).", "We then describe our own extension of the UDS-IH1 dataset ( 3), followed by our neural models ( 4).", "Using the data we collect, along with the existing datasets, we evaluate our models ( 6) in five experimental settings ( 5) and analyze the results ( 7).", "Words from effectively every syntactic category can convey information about the factuality of an event.", "For instance, negation (2a), modal auxiliaries (2b), determiners (2c), adverbs (2d), verbs (2e), adjectives (2f), and nouns (2g) can all con-1 Data available at decomp.net.", "vey that a particular event in the case of (2), leaving event did not happen.", "(2)", "a. Jo didn't leave .", "b. Jo might leave .", "c. Jo left no trace.", "d. Jo never left .", "e. Jo failed to leave .", "f. Jo's leaving was fake.", "g. 
Jo's leaving was a hallucination.", "Further, such words can interact to yield nontrivial effects on factuality inferences: (3a) conveys that the leaving didn't happen, while the su-perficially similar (3b) does not.", "A main goal of many theoretical treatments of factuality is to explain why these sorts of interactions occur and how to predict them.", "It is not possible to cover all the relevant literature in depth, and so we focus instead on the broader kind of interactions our models need to be able to capture in order to correctly predict the factuality of an event denoted by a particular predicatenamely, interactions between that predicate's outside and inside context, exemplified in Figure 1. Outside context Factuality information coming from the outside context is well-studied in the domain of clause-embedding predicates, which break into at least four categories: factives, like know and love (Kiparsky and Kiparsky, 1970; Karttunen, 1971b; Hintikka, 1975); implicatives, like manage and fail (Karttunen, 1971a, 2012, 2013; Karttunen et al., 2014), veridicals, like prove and verify (Egre, 2008; Spector and Egre, 2015), and non-veridicals, like hope and want .", "Consider the factive-implicative verb forget (Karttunen, 1971a; White, 2014).", "(4)", "a. Jo forgot that Bo left .", "b. Jo forgot to leave .", "(cid:9) (5)", "a. Jo didn't forget that Bo left .", "b. Jo didn't forget to leave .", "When a predicate directly embedded by forget is tensed, as in (4a) and (5a), we infer that that predicate denotes a factual event, regardless of whether forget is negated.", "In contrast, when a predicate directly embedded by forget is untensed, as in (4b) and (5b), our inference is dependent on whether forget is negated.", "Thus, any model that correctly predicts factuality will need to not only be able to represent the effect of individual words in the outside context on factuality inferences, it will furthermore need to represent their interaction.", "Inside context Knowledge of the inside context is important for integrating factuality information coming from a predicate's argumentse.g. 
from determiners, like some and no .", "In simple monoclausal sentences like those in (6), the number of arguments that contain a negative quantifier, like no , determine the factuality of the event denoted by the verb.", "An even number (or zero) will yield a factuality inference and an odd number will yield a nonfactuality inference.", "Thus, as for outside context, any model that correctly predicts factuality will need to integrate interactions between words in the inside context.", "The (non)necessity of syntactic information One question that arises in the context of inside and outside information is whether syntactic information is strictly necessary for capturing the relevant interactions between the two.", "To what extent is linear precedence sufficient for accurately computing factuality?", "We address these questions using two bidirectional LSTMsone that has a linear chain topology and another that has a dependency tree topology.", "Both networks capture context on either side of an event-denoting word, but each does it in a different way, depending on its topology.", "We show below that, while both networks outperform previous models that rely on deterministic rules and/or hand-engineered features, the linear chain-structured network reliably outperforms the tree-structured network.", "Saur and Pustejovsky (2009) present the FactBank corpus of event factuality annotations, built on top of the TimeBank corpus (Pustejovsky et al., 2006).", "These annotations (performed by trained annotators) are discrete, consisting of an epistemic modal { certain , probable , possible } and a polarity { + , } .", "In FactBank, factuality judgments are with respect to a source ; following recent work, here we consider only judgments with respect to a single source: the author.", "The smaller MEANTIME corpus (Minard et al., 2016) includes sim-732 Dataset Train Dev Test Total FactBank 6636 2462 663 9761 MEANTIME 967 210 218 1395 UW 9422 3358 864 13644 UDS-IH2 22108 2642 2539 27289 Table 1: Number of annotated predicates.", "ilar discrete factuality annotations.", "de Marneffe et al. (2012) re-annotate a portion of FactBank using crowd-sourced ordinal judgments to capture pragmatic effects on readers' factuality judgments.", "Lee et al. (2015) construct an event factuality dataset henceforth, UW on the TempEval-3 data (UzZaman et al., 2013) using crowdsourced annotations on a [ 3 , 3] scale ( certainly did not happen to certainly did ), with over 13,000 predicates.", "Adopting the [ 3 , 3] scale of Lee et al. (2015), Stanovsky et al. 
(2017) assemble a Unified Factuality dataset, mapping the discrete annotations of both FactBank and MEANTIME onto the UW scale.", "Each scalar annotation corresponds to a token representing the event, and each sentence may have more than one annotated token.", "The UDS-IH1 dataset (White et al., 2016) consists of factuality annotations over 6,920 event tokens, obtained with another crowdsourcing protocol.", "We adopt this protocol, described in 3, to collect roughly triple this number of annotations.", "We train and evaluate our factuality prediction models on this new dataset, UDS-IH2, as well as the unified versions of UW, FactBank, and MEANTIME.", "Table 1 shows the number of annotated predicates in each split of each factuality dataset used in this paper.", "Annotations relevant to event factuality and polarity appear in a number of other resources, including the Penn Discourse Treebank (Prasad et al., 2008), MPQA Opinion Corpus (Wiebe and Riloff, 2005), the LU corpus of author belief commitments (Diab et al., 2009), and the ACE and ERE formalisms.", "Soni et al. (2014) annotate Twitter data for factuality.", "Nairn et al. (2006) propose a deterministic algorithm based on hand-engineered lexical features for determining event factuality.", "They associate certain clause-embedding verbs with implication signatures (Table 2), which are used in a recursive polarity propagation algorithm.", "TruthTeller is also a recursive rule-based system for factuality (predicate truth) prediction using implication signatures, as well as other lexicaland dependency tree-based features (Lotan et al., 2013).", "Several systems use supervised models trained over rule-based features.", "Diab et al. (2009) and Prabhakaran et al. (2010) use SVMs and CRFs over lexical and dependency features for predicting author belief commitments, which they treat as a sequence tagging problem.", "Lee et al. (2015) train an SVM on lexical and dependency path features for their factuality dataset.", "Saur and Pustejovsky (2012) and Stanovsky et al. (2017) train support vector models over the outputs of rule-based systems, the latter with TruthTeller.", "Even the largest currently existing event factuality datasets are extremely small from the perspective of related tasks, like natural language inference (NLI).", "Where FactBank, UW, MEANTIME, and the original UDS-IH1 dataset have on the order of 30,000 labeled examples combined, standard NLI datasets, like the Stanford Natural Language Inference (SNLI; Bowman et al. 2015) dataset, have on the order of 500,000.", "To begin to remedy this situation, we collect an extension of the UDS-IH1 dataset.", "The resulting UDS-IH2 dataset covers all predicates in EUD1.2.", "Beyond substantially expanding the amount of publicly available event factuality annotations, another major benefit is that EUD1.2 consists entirely of gold parses and has a variety of other annotations built on top of it, making future multitask modeling possible.", "We use the protocol described by White et al. (2016) to construct UDS-IH2.", "This protocol involves four kinds of questions for a particular predicate candidate: 1. UNDERSTANDABLE : whether the sentence is understandable 2. PREDICATE : whether or not a particular word refers to an eventuality (event or state) 3. HAPPENED : whether or not, according to the author, the event has already happened or is currently happening 4. 
CONFIDENCE : how confident the annotator is about their answer to HAPPENED from 0-4 If an annotator answers no to either UNDERSTANDABLE or PREDICATE , HAPPENED and CONFIDENCE do not appear.", "The main differences between this protocol and the others discussed above are:", "(i) instead of asking about annotator confidence, the other proto-733 lll l lll l llll l lll llll llll l l l l l l l l 3 2 1 0 1 2 3 l l l l FactBank UW MEANTIMEUDSIH2 Figure 2: Relative frequency of factuality ratings in training and development sets.", "cols ask the annotator to judge either source confidence or likelihood; and", "(ii) factuality and confidence are separated into two questions.", "We choose to retain White et", "al.'s protocol to maintain consistency with the portions of EUD1.2 that were already annotated in UDS-IH1.", "Annotators We recruited 32 unique annotators through Amazon's Mechanical Turk to annotate 20,580 total predicates in groups of 10.", "Each predicate was annotated by two distinct annotators.", "Including UDS-IH1, this brings the total number of annotated predicates to 27,289.", "Raw inter-annotator agreement for the HAPPENED question was 0.84 (Cohen's =0.66) among the predicates annotated only for UDS-IH2.", "This compares to the raw agreement score of 0.82 reported by White et al. (2016) for UDS-IH1.", "To improve the overall quality of the annotations, we filter annotations from annotators that display particularly low agreement with other annotators on HAPPENED and CONFIDENCE .", "(See the Supplementary Materials for details.)", "Pre-processing To compare model results on UDS-IH2 to those found in the unified datasets of Stanovsky et al. (2017), we map the HAPPENED and CONFIDENCE ratings to a single FACTUALITY value in [-3,3] by first taking the mean confidence rating for each predicate and mapping FACTUALITY to 34 CONFIDENCE if HAPPENED and 34 CONFIDENCE otherwise.", "Response distribution Figure 2 plots the distribution of factuality ratings in the train and dev splits for UDS-IH2, alongside those of FactBank, UW, and MEANTIME.", "One striking feature of these distributions is that UDS-IH2 displays a much more entropic distribution than the other datasets.", "This may be due to the fact that, unlike the newswire-heavy corpora that the other datasets annotate, EUD1.2 contains text from genres weblogs, newsgroups, email, reviews, and question-answers that tend to involve less reporting of raw facts.", "One consequence of this more entropic distribution is that, unlike the datasets discussed above, it is much harder for systems that always guess 3 i.e. factual with high confi-dence/likelihood to perform well.", "We consider two neural models of factuality: a stacked bidirectional linear chain LSTM ( 4.1) and a stacked bidirectional child-sum dependency tree LSTM ( 4.2).", "To predict the factuality v t for the event referred to by a word w t , we use the hidden state at t from the final layer of the stack as the input to a two-layer regression model ( 4.3).", "We use a standard stacked bidirectional linear chain LSTM (stacked L-biLSTM), which extends the unidirectional linear chain LSTM (Hochreiter and Schmidhuber, 1997) by adding the notion of a layer l { 1 , . . . 
, L } and a direction d { , } (Graves et al., 2013; Sutskever et al., 2014; Zaremba and Sutskever, 2014).", "(cid:16) (cid:17) where is the Hadamard product; prev ( t ) = t 1 and prev ( t ) = t + 1 , and x ( l,d ) t = x t if l = 1 ; and x ( l,d ) t = [ h ( l 1 , ) t ; h ( l 1 , ) t ] otherwise.", "We set g to the pointwise nonlinearity tanh.", "We use a stacked bidirectional extension to the child-sum dependency tree LSTM (T-LSTM; Tai et al., 2015), which is itself an extension of a standard unidirectional linear chain LSTM (L-LSTM).", "One way to view the difference between the L-LSTM and the T-LSTM is that the T-LSTM re-defines prev ( t ) to return the set of indices that 734 correspond to the children of w t in some dependency tree.", "Because the cardinality of these sets varies with t , it is necessary to specify how multiple children are combined.", "The basic idea, which we make explicit in the equations for our extension, is to define f tk for each child index k prev ( t ) in a way analogous to the equations in 4.1 i.e. as though each child were the only child and then sum across k within the equations for i t , o t , c t , c t , and h t .", "Our stacked bidirectional extension (stacked T-biLSTM) is a minimal extension to the T-LSTM in the sense that we merely define the downward computation in terms of a prev ( t ) that returns the set of indices that correspond to the parents of w t in some dependency tree (cf. Miwa and Bansal 2016, who propose a similar, but less minimal, model for relation extraction).", "The same method for combining children in the upward computation can then be used for combining parents in the downward computation.", "This yields a minimal change to the stacked L-biLSTM equations.", "(cid:16) (cid:17) We use a ReLU pointwise nonlinearity for g .", "These minimal changes allow us to represent the inside and the outside contexts of word t (at layer l ) as single vectors: h ( l, ) t and h ( l, ) t .", "An important thing to note here is that in contrast to other dependency tree-structured T-LSTMs (Socher et al., 2014; Iyyer et al., 2014) this T-biLSTM definition does not use the dependency labels in any way.", "Such labels could be straightforwardly incorporated to determine which parameters are used in a particular cell, but for current purposes, we retain the simpler structure", "(i) to more directly compare the Land T-biLSTMs and", "(ii) because a model that uses dependency labels substantially increases the number of trainable parameters, relative to the size of our datasets.", "To predict the factuality v t for the event referred to by a word w t , we use the hidden states from the final layer of the stacked Lor T-biLSTM as the input to a two-layer regression model.", "h ( L ) t = [ h ( L, ) t ; h ( L, ) t ] v t = V 2 g (cid:16) V 1 h ( L ) t + b 1 (cid:17) + b 2 where v t is passed to a loss function L ( v t , v t ) : in this case, smooth L1 i.e. Huber loss with = 1 .", "This loss function is effectively a smooth variant of the hinge loss used by Lee et al. (2015) and Stanovsky et al. 
(2017).", "We also consider a simple ensemble method, wherein the hidden states from the final layers of both the stacked L-biLSTM and the stacked T-biLSTM are concatenated and passed through the same two-layer regression model.", "We refer to this as the H(ybrid)-biLSTM.", "2 5 Experiments Implementation We implement both the L-biLSTM and T-biLSTM models using pytorch 0.2.0 .", "The L-biLSTM model uses the stock implementation of the stacked bidirectional linear chain LSTM found in pytorch , and the T-biLSTM model uses a custom implementation, which we make available at decomp.net.", "Word embeddings We use the 300-dimensional GloVe 42B uncased word embeddings (Penning-ton et al., 2014) with an UNK embedding whose dimensions are sampled iid from a Uniform[-1,1].", "We do not tune these embeddings during training.", "Hidden state sizes We set the dimension of the hidden states h ( l,d ) t and cell states c ( l,d ) t to 300 for all layers of the stacked Land stacked T-biLSTMs the same size as the input word embeddings.", "This means that the input to the regression model is 600-dimensional, for the stacked Land T-biLSTMs, and 1200-dimensional, for the stacked H-biLSTM.", "For the hidden layer of the regression component, we set the dimension to half the size of the input hidden state: 300, for 2 See Miwa and Bansal 2016; Bowman et al. 2016 for alternative ways of hybridizing linear and tree LSTMs for semantic tasks.", "We use the current method since it allows us to make minimal changes to the architectures of each model, which in turn allows us to assess the two models' ability to capture different aspects of factuality.", "Bidirectional layers We consider stacked L-, T, and H-biLSTMs with either one or two layers.", "In preliminary experiments, we found that networks with three layers badly overfit the training data.", "Dependency parses For the Tand H-biLSTMs, we use the gold dependency parses provided in EUD1.2 when training and testing on UDS-IH2.", "On FactBank, MEANTIME, and UW, we follow Stanovsky et al. (2017) in using the automatic dependency parses generated by the parser in spaCy (Honnibal and Johnson, 2015).", "3 Lexical features Recent work on neural models in the closely related domain of generic-ity/habituality prediction suggests that inclusion of hand-annotated lexical features can improve clas-sification performance (Becker et al., 2017).", "To assess whether similar performance gains can be obtained here, we experiment with lexical features for simple factive and implicative verbs (Kiparsky and Kiparsky, 1970; Karttunen, 1971a).", "When in use, these features are concatenated to the net-work's input word embeddings so that, in princi-ple, they may interact with one another and inform other hidden states in the biLSTM, akin to how verbal implicatives and factives are observed to influence the factuality of their complements.", "The hidden state size is increased to match the input embedding size.", "We consider two types: Signature features We compute binary features based on a curated list of 92 simple implicative and 95 factive verbs including their their type-level implication signatures, as compiled by Nairn et al. 
(2006).", "4 These signatures characterize the 3 In rebuilding the Unified Factuality dataset (Stanovsky et al., 2017), we found that sentence splitting was potentially sensitive to the version of spaCy used.", "implicative or factive behavior of a verb with respect to its complement clause, how this behavior changes (or does not change) under negation, and how it composes with other such verbs under nested recursion.", "We create one indicator feature for each signature type.", "Mined features Using a simplified set of pattern matching rules over Common Crawl data (Buck et al., 2014), we follow the insights of Pavlick and Callison-Burch (2016) henceforth, PC and use corpus mining to automatically score verbs for implicativeness.", "The insight of PC lies in Karttunen's (1971a) observation that the main sentence containing an implicative predicate and the complement sentence necessarily agree in tense.", "Accordingly, PC devise a tense agreement score effectively, the ratio of times an embedding predicate's tense matches the tense of the predicate it embeds to predict implicativeness in English verbs.", "Their scoring method involves the use of fine-grained POS tags, the Stanford Temporal Tagger (Chang and Manning, 2012), and a number of heuristic rules, which resulted in a confirma-tion that tense agreement statistics are predictive of implicativeness, illustrated in part by observing a near perfect separation of a list of implicative and non-implicative verbs from Karttunen (1971a).", "We replicate this finding by employing a simplified pattern matching method over 3B sentences of raw Common Crawl text.", "We efficiently search for instances of any pattern of the form: I $VERB to * $TIME , where $VERB and $TIME are pre-instantiated variables so their corresponding tenses are known, and * ' matches any one to three whitespace-separated tokens at runtime (not pre-instantiated).", "5 Our results in Table 3 are a close lnr/Lexical_Resources 5 To instantiate $VERB , we use a list of 1K clause-embedding verbs compiled by (White and Rawlins, 2016) as well as the python package pattern-en to conjugate each verb in past, present progressive, and future tenses; all conjugations are first-person singular.", "$TIME is instantiated 736 FactBank UW Meantime UDS-IH2 MAE r MAE r MAE r MAE r All-3.0 0.8 NAN 0.78 NAN 0.31 NAN 2.255 NAN Lee et al. 2015 -0.511 0.708 --Stanovsky et al. 
2017 0.59 0.71 0.42 0.66 0.34 0.47 -L-biLSTM(2)-S 0.427 0.826 0.508 0.719 0.427 0.335 0.960 0.768 T-biLSTM(2)-S 0.577 0.752 0.600 0.645 0.428 0.094 1.101 0.704 L-biLSTM(2)-G 0.412 0.812 0.523 0.703 0.409 0.462 -T-biLSTM(2)-G 0.455 0.809 0.567 0.688 0.396 0.368 -L-biLSTM(2)-S+lexfeats 0.429 0.796 0.495 0.730 0.427 0.322 1.000 0.755 T-biLSTM(2)-S+lexfeats 0.542 0.744 0.567 0.676 0.375 0.242 1.087 0.719 L-biLSTM(2)-MultiSimp 0.353 0.843 0.503 0.725 0.345 0.540 -T-biLSTM(2)-MultiSimp 0.482 0.803 0.599 0.645 0.545 0.237 -L-biLSTM(2)-MultiBal 0.391 0.821 0.496 0.724 0.278 0.613 -T-biLSTM(2)-MultiBal 0.517 0.788 0.573 0.659 0.400 0.405 -L-biLSTM(1)-MultiFoc 0.343 0.823 0.516 0.698 0.229 0.599 -L-biLSTM(2)-MultiFoc 0.314 0.846 0.502 0.710 0.305 0.377 -T-biLSTM(2)-MultiFoc 1.100 0.234 0.615 0.616 0.395 0.300 -L-biLSTM(2)-MultiSimp w/UDS-IH2 0.377 0.828 0.508 0.722 0.367 0.469 0.965 0.771 T-biLSTM(2)-MultiSimp w/UDS-IH2 0.595 0.716 0.598 0.609 0.467 0.345 1.072 0.723 H-biLSTM(2)-S 0.488 0.775 0.526 0.714 0.442 0.255 0.967 0.768 H-biLSTM(1)-MultiSimp 0.313 0.857 0.528 0.704 0.314 0.545 -H-biLSTM(2)-MultiSimp 0.431 0.808 0.514 0.723 0.401 0.461 -H-biLSTM(2)-MultiBal 0.386 0.825 0.502 0.713 0.352 0.564 -H-biLSTM(2)-MultiSimp w/UDS-IH2 0.393 0.820 0.481 0.749 0.374 0.495 0.969 0.760 Table 4: All 2-layer systems, and 1-layer systems if best in column.", "replication of PC's findings.", "Prior work such as by PC is motivated in part by the potential for corpus-linguistic findings to be used as fodder in downstream predictive tasks: we include these agreement scores as potential input features to our networks to test whether contemporary models do in fact benefit from this information.", "Training For all experiments, we use stochastic gradient descent to train the LSTM parameters and regression parameters end-to-end with the Adam optimizer (Kingma and Ba, 2015), using the default learning rate in pytorch ( 1e-3 ).", "We consider five training regimes: 6 1. SINGLE-TASK SPECIFIC (-S) Train a separate instance of the network for each dataset, training only on that dataset.", "2. SINGLE-TASK GENERAL (-G) Train one instance of the network on the simple concatenation of all unified factuality datasets, { FactBank, UW, MEANTIME } .", "3. MULTI-TASK SIMPLE (-MULTISIMP ) Same with each of five past tense phrases (yesterday, last week, etc.) and five corresponding future tense phrases (tomor-row, next week, etc).", "See Supplement for further details.", "6 Multi-task can have subtly different meanings in the NLP community; following terminology from Mou et al. (2016), our use is best described as semantically equivalent transfer with simultaneous (MULT) network training.", "as SINGLE-TASK GENERAL , except the network maintains a distinct set of regression parameters for each dataset; all other parameters (LSTM) remain tied.", "w/UDS-IH2 is specified if UDS-IH2 is included in training.", "4. MULTI-TASK BALANCED (-MULTIBAL ) Same as MULTI-TASK SIMPLE but upsampling examples from the smaller datasets to ensure that examples from those datasets are seen at the same rate.", "5. 
MULTI-TASK FOCUSED (-MULTIFOC ) Same as MULTI-TASK SIMPLE but upsampling examples from a particular target dataset to ensure that examples from that dataset are seen 50% of the time and examples from the other datasets are seen 50% (evenly distributed across the other datasets).", "Calibration Post-training, network predictions are monotonically re-adjusted to a specific dataset using isotonic regression (fit on train split only).", "Evaluation Following Lee et al. (2015) and Stanovsky et al. (2017), we report two evaluation measures: mean absolute error (MAE) and Pearson correlation (r).", "We would like to note, however, that we believe correlation to be a better indicator of performance for two reasons:", "(i) for datasets with a high degree of label imbalance 737 Mean Linear Tree Modal Negated Label MAE MAE # NONE no 1.00 0.93 1.03 2244 NONE yes -0.19 1.40 1.69 98 may no -0.38 1.00 0.99 14 would no -0.61 0.85 0.99 39 ca(n't) yes -0.72 1.28 1.55 11 can yes -0.75 0.99 0.86 6 (wi)'ll no -0.94 1.47 1.14 8 could no -1.03 0.97 1.32 20 can no -1.25 1.02 1.21 73 might no -1.25 0.66 1.06 6 would yes -1.27 0.40 0.86 5 should no -1.31 1.20 1.01 22 will no -1.88 0.75 0.86 75 Table 5: Mean gold labels, counts, and MAE for LbiLSTM(2)-S and T-biLSTM(2)-S model predictions on UDS-IH2-dev, grouped by modals and negation.", "(Figure 2), a baseline that always guesses the mean or mode label can be difficult to beat in terms of MAE but not correlation, and", "(ii) MAE is harder to meaningfully compare across datasets with different label mean and variance.", "Development Under all regimes, we train the model for 20 epochs by which time all models appear to converge.", "We save the parameter values after the completion of each epoch and then score each set of saved parameter values on the development set for each dataset.", "The set of parameter values that performed best on dev in terms of Pearson correlation for a particular dataset were then used to score the test set for that dataset.", "Table 4 reports the results for all of the 2-layer L-, T-, and H-biLSTMs.", "7 The best-performing system for each dataset and metric are highlighted in purple, and when the best-performing system for a particular dataset was a 1-layer model, that system is included in Table 4. New state of the art For each dataset and metric, with the exception of MAE on UW, we achieve state of the art results with multiple systems.", "The highest-performing system for each is reported in Table 4. Our results on UDS-IH2 are the first reported numbers for this new factuality resource.", "7 Full results are reported in the Supplementary Materials.", "Note that the 2-layer networks do not strictly dominate the 1-layer networks in terms of MAE and correlation.", "topology (T-biLSTM).", "However, the hybrid topology (H-biLSTM), consisting of both a Land T-biLSTM is the top-performing system on UW for correlation (Table 4).", "This suggests that the T-biLSTM may be contributing something complementary to the L-biLSTM.", "Evidence of this complementarity can be seen in Table 6, which contains a breakdown of system performance by governing dependency relation, for both linear and tree models, on UDS-IH2-dev.", "In most cases, the L-biLSTM's mean prediction is closer to the true mean.", "This appears to arise in part because the T-biLSTM is less confident in its predictions i.e. 
its mean prediction tends to be closer to 0.", "This results in the L-biLSTM being too confident in certain cases", "e.g.", "in the case of the xcomp governing relation, where the T-biLSTM mean prediction is closer to the true mean.", "Lexical features have minimal impact Adding all lexical features (both SIGNATURE and MINED ) yields mixed results.", "We see slight improvements on UW, while performance on the other datasets mostly declines (compare with SINGLETASK SPECIFIC ).", "Factuality prediction is precisely the kind of NLP task one would expect these types of features to assist with, so it is notable that, in our experiments, they do not.", "Multi-task helps Though our methods achieve state of the art in the single-task setting, the best performing systems are mostly multi-task (Table 4 and Supplementary Materials).", "This is an ideal setting for multi-task training: each dataset is relatively small, and their labels capture closely-related (if not identical) linguistic phenomena.", "UDS-IH2, the largest by a factor of two, reaps the smallest gains from multi-task.", "As discussed in 2, many discrete linguistic phenomena interact with event factuality.", "Here we provide a brief analysis of some of those interactions, both as they manifest in the UDS-IH2 dataset, as well as in the behavior of our models.", "This analysis employs the gold dependency parses present in EUD1.2.", "Table 5 illustrates the influence of modals and negation on the factuality of the events they have direct scope over.", "The context with the highest factuality on average is no direct modal and no negation (first row); all other modal contexts have varying degrees of negative mean factuality scores, with will as the most negative.", "This is likely a result of UDS-IH2 annotation instructions to mark future events as not having happened.", "Table 7 shows results from a manual error analysis on 50 events from UDS-IH2-dev with highest absolute prediction error (using H-biLSTM(2)-MultiSim w/UDS-IH2).", "Grammatical errors (such as run-on sentences) in the underlying text of UDS-IH2 appear to pose a particular challenge for these models; informal language and grammatical errors in UDS-IH2 is a substantial distinction from the other factuality datasets used here.", "In 6 we observe that the linguistically-motivated lexical features that we test (+lexfeats) do not have a big impact on overall performance.", "Tables 8 and 9 help nuance this observation.", "Table 8 shows that we can achieve similar separation between implicatives and non-implicatives as the feature mining strategy presented in 5. 
That is, those features may be redundant with information already learnable from factuality datasets (UDS-IH2).", "Despite the un-derperformance of these features overall, Table 9 shows that they may still improve performance in the subset of instances where they appear.", "We have proposed two neural models of event factuality prediction a bidirectional linear-chain LSTM (L-biLSTM) and a bidirectional child-sum dependency tree LSTM (T-biLSTM) which yield substantial gains over previous models based on deterministic rules and hand-engineered features.", "We found that both models yield such gains, though the L-biLSTM outperforms the T-biLSTM; for some datasets, an ensemble of the two (H-biLSTM) improves over either alone.", "We have also extended the UDS-IH1 dataset, yielding the largest publicly-available factuality dataset to date: UDS-IH2.", "In experiments, we see substantial gains from multi-task training over the three factuality datasets unified by Stanovsky et al. (2017), as well as UDS-IH2.", "Future work will further probe the behavior of these models, or extend them to learn other aspects of event semantics.", "This research was supported by the JHU HLT-COE, DARPA LORELEI, DARPA AIDA, and NSF-GRFP (1232825).", "The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes.", "The views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of DARPA or the U.S. Government." ]
[ "result", "method", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "abstain", "method", "objective", "method", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "result", "abstain", "other", "other", "other" ]
[ "We propose Generation-Augmented Retrieval (GAR ) for answering open-domain questions, which augments a query through text generation of heuristically discovered relevant contexts without external resources as supervision.", "We demonstrate that the generated contexts substantially enrich the semantics of the queries and GAR with sparse representations (BM25) achieves comparable or better performance than state-of-the-art dense retrieval methods such as DPR (Karpukhin et al., 2020).", "We show that generating diverse contexts for a query is beneficial as fusing their results consistently yields better retrieval accuracy.", "Moreover, as sparse and dense representations are often complementary, GAR can be easily combined with DPR to achieve even better performance.", "GAR achieves state-of-the-art performance on Natural Questions and TriviaQA datasets under the extractive QA setup when equipped with an extractive reader, and consistently outperforms other retrieval methods when the same generative reader is used.", "1 1 Introduction Open-domain question answering (OpenQA) aims to answer factoid questions without a pre-specified domain and has numerous real-world applications.", "In OpenQA, a large collection of documents ( e.g. , Wikipedia) are often used to seek information pertaining to the questions.", "One of the most common approaches uses a retriever-reader architecture (Chen et al., 2017), which first retrieves a small subset of documents using the question as the query and then reads the retrieved documents to extract (or generate) an answer.", "The retriever is crucial as it is infeasible to examine every piece of information in the entire document collection ( e.g. , millions of Wikipedia passages) and the retrieval accuracy bounds the performance of the (extractive) reader.", "Work was done during internship at Microsoft Azure AI.", "1 Our code is available at https://github.com/ morningmoni/GAR .", "Early OpenQA systems (Chen et al., 2017) use classic retrieval methods such as TF-IDF and BM25 with sparse representations.", "Sparse methods are lightweight and efficient, but unable to perform semantic matching and fail to retrieve relevant passages without lexical overlap.", "More recently, methods based on dense representations (Guu et al., 2020; Karpukhin et al., 2020) learn to embed queries and passages into a latent vector space, in which text similarity beyond lexical overlap can be measured.", "Dense retrieval methods can retrieve semantically relevant but lexically different passages and often achieve better performance than sparse methods.", "However, the dense models are more computationally expensive and suffer from information loss as they condense the entire text sequence into a fixed-size vector that does not guarantee exact matching (Luan et al., 2020).", "There have been some recent studies on query reformulation with text generation for other retrieval tasks, which, for example, rewrite the queries to context-independent (Yu et al., 2020; Lin et al., 2020; Vakulenko et al., 2020) or well-formed (Liu et al., 2019) ones.", "However, these methods require either task-specific data ( e.g. 
, conversational contexts, ill-formed queries) or external resources such as paraphrase data (Zaiem and Sadat, 2019; Wang et al., 2020) that cannot or do not transfer well to OpenQA.", "Also, some rely on time-consuming training process like reinforcement learning (RL) (Nogueira and Cho, 2017; Liu et al., 2019; Wang et al., 2020) that is not efficient enough for OpenQA (more discussions in Sec. 2).", "In this paper, we propose Generation-Augmented Retrieval (GAR ), which augments a query through text generation of a pre-trained language model (PLM).", "Different from prior studies that reformulate queries, GAR does not require external resources or downstream feedback via RL as supervision, because it does not rewrite the query but expands it with heuristically discovered relevant contexts, which are fetched from PLMs and provide richer background information (Table 2).", "For example, by prompting a PLM to generate the title of a relevant passage given a query and appending the generated title to the query, it becomes easier to retrieve that relevant passage.", "Intuitively, the generated contexts explicitly express the search intent not presented in the original query.", "As a result, GAR with sparse representations achieves comparable or even better performance than state-of-the-art approaches (Karpukhin et al., 2020; Guu et al., 2020) with dense representations of the original queries, while being more lightweight and efficient in terms of both training and inference (including the cost of the generation model) (Sec. 6.4).", "Specifically, we expand the query (question) by adding relevant contexts as follows.", "We conduct seq2seq learning with the question as the input and various freely accessible in-domain contexts as the output such as the answer, the sentence where the answer belongs to , and the title of a passage that contains the answer .", "We then append the generated contexts to the question as the generation-augmented query for retrieval.", "We demonstrate that using multiple contexts from diverse generation targets is beneficial as fusing the retrieval results of different generation-augmented queries consistently yields better retrieval accuracy.", "We conduct extensive experiments on the Natural Questions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (Trivia) (Joshi et al., 2017) datasets.", "The results reveal four major advantages of GAR : (1) GAR , combined with BM25, achieves significant gains over the same BM25 model that uses the original queries or existing unsupervised query expansion (QE) methods.", "(2) GAR with sparse representations (BM25) achieves comparable or even better performance than the current state-of-the-art retrieval methods, such as DPR (Karpukhin et al., 2020), that use dense representations.", "(3) Since GAR uses sparse representations to measure lexical overlap 2 , it is complementary to dense representations: by fusing the retrieval results of GAR and DPR, we obtain consistently better performance than either method used individually.", "(4) GAR outperforms DPR in the end-to-end QA performance (EM) when the same extractive reader is used: EM=41.8 (43.8 when combining with DPR) 2 Strictly speaking, GAR with sparse representations handles semantics before retrieval by enriching the queries, while maintaining the advantage of exact matching.", "on NQ and 62.7 on Trivia, creating new state-of-the-art results for extractive OpenQA.", "GAR also outperforms other retrieval methods under the generative setup when the same generative reader is used: 
EM=38.1 (45.3 when combining with DPR) on NQ and 62.2 on Trivia.", "Contributions .", "(1) We propose Generation-Augmented Retrieval (GAR ), which augments queries with heuristically discovered relevant contexts through text generation without external supervision or time-consuming downstream feedback.", "(2) We show that using generation-augmented queries achieves significantly better retrieval and QA results than using the original queries or existing unsupervised QE methods.", "(3) We show that GAR , combined with a simple BM25 model, achieves new state-of-the-art performance on two benchmark datasets in extractive OpenQA and competitive results in the generative setting.", "Conventional Query Expansion .", "GAR shares some merits with query expansion (QE) methods based on pseudo relevance feedback (Rocchio, 1971; Abdul-Jaleel et al., 2004; Lv and Zhai, 2010) in that they both expand the queries with relevant contexts (terms) without the use of external supervision.", "GAR is superior as it expands the queries with knowledge stored in the PLMs rather than the retrieved passages and its expanded terms are learned through text generation.", "Recent Query Reformulation .", "There are recent or concurrent studies (Nogueira and Cho, 2017; Zaiem and Sadat, 2019; Yu et al., 2020; Vakulenko et al., 2020; Lin et al., 2020) that reformulate queries with generation models for other retrieval tasks.", "However, these studies are not easily applicable or efficient enough for OpenQA because: (1) They require external resources such as paraphrase data (Zaiem and Sadat, 2019), search sessions (Yu et al., 2020), or conversational contexts (Lin et al., 2020; Vakulenko et al., 2020) to form the reformulated queries, which are not available or showed inferior domain-transfer performance in OpenQA (Zaiem and Sadat, 2019); (2) They involve time-consuming training process such as RL.", "For example, Nogueira and Cho (2017) reported a training time of 8 to 10 days as it uses retrieval performance in the reward function and conducts retrieval at each iteration.", "In contrast, GAR uses freely accessible in-domain contexts like passage titles as the generation targets and standard seq2seq learning, which, despite its simplicity, is not only more efficient but effective for OpenQA.", "Retrieval for OpenQA .", "Existing sparse retrieval methods for OpenQA (Chen et al., 2017) solely rely on the information of the questions.", "GAR extends to contexts relevant to the questions by extracting information inside PLMs and helps sparse methods achieve comparable or better performance than dense methods (Guu et al., 2020; Karpukhin et al., 2020), while enjoying the simplicity and efficiency of sparse representations.", "GAR can also be used with dense representations to seek for even better performance, which we leave as future work.", "Generative QA .", "Generative QA generates answers through seq2seq learning instead of extracting answer spans.", "Recent studies on generative OpenQA (Lewis et al., 2020a; Min et al., 2020; Izacard and Grave, 2020) are orthogonal to GAR in that they focus on improving the reading stage and directly reuse DPR (Karpukhin et al., 2020) as the retriever.", "Unlike generative QA, the goal of GAR is not to generate perfect answers to the questions but pertinent contexts that are helpful for retrieval.", "Another line in generative QA learns to generate answers without relevant passages as the evidence but solely the question itself using PLMs (Roberts et al., 2020; Brown et al., 2020).", "GAR 
further confirms that one can extract factual knowledge from PLMs, which is not limited to the answers as in prior studies but also other relevant contexts.", "OpenQA aims to answer factoid questions without pre-specified domains.", "We assume that a large collection of documents C ( i.e. , Wikipedia) are given as the resource to answer the questions and a retriever-reader architecture is used to tackle the task, where the retriever retrieves a small subset of the documents D C and the reader reads the documents D to extract (or generate) an answer.", "Our goal is to improve the effectiveness and efficiency of the retriever and consequently improve the performance of the reader.", "In GAR , queries are augmented with various heuristically discovered relevant contexts in order to retrieve more relevant passages in terms of both quantity and quality.", "For the task of OpenQA where the query is a question, we take the following three freely accessible contexts as the generation targets.", "We show in Sec. 6.2 that having multiple generation targets is helpful in that fusing their results consistently brings better retrieval accuracy.", "Context 1: The default target (answer) .", "The default target is the label in the task of interest, which is the answer in OpenQA.", "The answer to the question is apparently useful for the retrieval of relevant passages that contain the answer itself.", "As shown in previous work (Roberts et al., 2020; Brown et al., 2020), PLMs are able to answer certain questions solely by taking the questions as input ( i.e. , closed-book QA).", "Instead of using the generated answers directly as in closed-book QA, GAR treats them as contexts of the question for retrieval.", "The advantage is that even if the generated answers are partially correct (or even incorrect), they may still benefit retrieval as long as they are relevant to the passages that contain the correct answers ( e.g. , co-occur with the correct answers).", "Context 2: Sentence containing the default target .", "The sentence in a passage that contains the answer is used as another generation target.", "Similar to using answers as the generation target, the generated sentences are still beneficial for retrieving relevant passages even if they do not contain the answers, as their semantics is highly related to the questions/answers (examples in Sec. 
6.1).", "One can take the relevant sentences in the ground-truth passages (if any) or those in the positive passages of a retriever as the reference, depending on the trade-off between reference quality and diversity.", "Context 3: Title of passage containing the default target .", "One can also use the titles of relevant passages as the generation target if available.", "Specifically, we retrieve Wikipedia passages using BM25 with the question as the query, and take the page titles of positive passages that contain the answers as the generation target.", "We observe that the page titles of positive passages are often entity names of interest, and sometimes (but not always) the answers to the questions.", "Intuitively, if GAR learns which Wikipedia pages the question is related to, the queries augmented by the generated titles would naturally have a better chance of retrieving those relevant passages.", "While it is likely that some of the generated query contexts involve unfaithful or nonfactual information due to hallucination in text generation (Mao et al., 2020) and introduce noise during retrieval, they are beneficial rather than harmful overall, as our experiments show that GAR improve both retrieval and QA performance over BM25 significantly.", "Also, since we generate 3 different (com-plementary) query contexts and fuse their retrieval results, the distraction of hallucinated content is further alleviated.", "After generating the contexts of a query, we append them to the query to form a generation-augmented query .", "3 We observe that conducting retrieval with the generated contexts ( e.g. , answers) alone as queries instead of concatenation is ineffective because (1) some of the generated answers are rather irrelevant, and (2) a query consisting of the correct answer alone (without the question) may retrieve false positive passages with unrelated contexts that happen to contain the answer.", "Such low-quality passages may lead to potential issues in the following passage reading stage.", "If there are multiple query contexts, we conduct retrieval using queries with different generated contexts separately and then fuse their results.", "The performance of one-time retrieval with all the contexts appended is slightly but not significantly worse.", "For simplicity, we fuse the retrieval results in a straightforward way: an equal number of passages are taken from the top-retrieved passages of each source.", "One may also use weighted or more sophisticated fusion strategies such as reciprocal rank fusion (Cormack et al., 2009), the results of which are slightly better according to our experiments.", "4 Next, one can use any off-the-shelf retriever for passage retrieval.", "Here, we use a simple BM25 model to demonstrate that GAR with sparse representations can already achieve comparable or better performance than state-of-the-art dense methods while being more lightweight and efficient (includ-ing the cost of the generation model), closing the gap between sparse and dense retrieval methods.", "To further verify the effectiveness of GAR , we equip it with both extractive and generative readers for end-to-end QA evaluation.", "We follow the 3 One may create a title field during document indexing and conduct multi-field retrieval but here we append the titles to the questions as other query contexts for generalizability.", "For the extractive setup, we largely follow the design of the extractive reader in DPR (Karpukhin et al., 2020).", "Let D = [ d 1 , d 2 , ..., d k ] denote the list 
of retrieved passages with passage relevance scores D̂.", "Let S_i = [s_1, s_2, ..., s_N] denote the top N text spans in passage d_i ranked by span relevance scores Ŝ_i.", "Briefly, the DPR reader uses BERT-base (Devlin et al., 2019) for representation learning, where it estimates the passage relevance score D̂_k for each retrieved passage d_k based on the [CLS] tokens of all retrieved passages D, and assigns span relevance scores Ŝ_i for each candidate span based on the representations of its start and end tokens.", "Finally, the span with the highest span relevance score from the passage with the highest passage relevance score is chosen as the answer.", "We refer the readers to Karpukhin et al. (2020) for more details.", "Passage-level Span Voting .", "Many extractive QA methods (Chen et al., 2017; Min et al., 2019b; Guu et al., 2020; Karpukhin et al., 2020) measure the probability of span extraction in different retrieved passages independently, even though their collective signals may provide more evidence in determining the correct answer.", "We propose a simple yet effective passage-level span voting mechanism, which aggregates the predictions of the spans in the same surface form from different retrieved passages.", "Intuitively, if a text span is considered as the answer multiple times in different passages, it is more likely to be the correct answer.", "Specifically, GAR calculates a normalized score p(S_i[j]) for the j-th span in passage d_i during inference as follows: p(S_i[j]) = softmax(D̂)[i] · softmax(Ŝ_i)[j].", "GAR then aggregates the scores of the spans with the same surface string among all the retrieved passages as the collective passage-level score.", "We find that the number of spans used for normalization in each passage does not have a significant impact on the final performance (we take N = 5), and using the raw or normalized strings for aggregation also performs similarly.", "For the generative setup, we use a seq2seq framework where the input is the concatenation of the question and top-retrieved passages and the target output is the desired answer.", "Such generative readers are adopted in recent methods such as SpanSeqGen (Min et al., 2020) and Longformer (Beltagy et al., 2020).", "Specifically, we use BART-large (Lewis et al., 2019) as the generative reader, which concatenates the question and top-retrieved passages up to its length limit (1,024 tokens, 7.8 passages on average).", "Generative GAR is directly comparable with SpanSeqGen (Min et al., 2020), which uses the retrieval results of DPR, but not comparable with Fusion-in-Decoder (FID) (Izacard and Grave, 2020), since FID encodes 100 passages rather than 1,024 tokens and involves more model parameters.", "We conduct experiments on the open-domain version of two popular QA benchmarks: Natural Questions (NQ) (Kwiatkowski et al., 2019) and TriviaQA (Trivia) (Joshi et al., 2017).", "The statistics of the datasets are listed in Table 1.", "Following prior studies (Karpukhin et al., 2020), we use top-k retrieval accuracy to evaluate the performance of the retriever and the Exact Match (EM) score to measure the performance of the reader.", "Top-k retrieval accuracy is defined as the proportion of questions for which the top-k retrieved passages contain at least one answer span, which is an upper bound of how many questions are answerable by an extractive reader.", "Exact Match (EM) is the proportion of the predicted answer spans being exactly the same as (one of) the ground-truth 
answer(s), after string normalization such as article and punctuation removal.", "For passage retrieval, we mainly compare with BM25 and DPR, which represent the most used state-of-the-art methods of sparse and dense retrieval for OpenQA, respectively.", "For query expansion, we re-emphasize that GAR is the first QE approach designed for OpenQA and most of the recent approaches are not applicable or efficient enough for OpenQA since they have task-specific objectives, require external supervision that was shown to transfer poorly to OpenQA, or take many days to train (Sec. 2).", "We thus compare with a classic unsupervised QE method RM3 (Abdul-Jaleel et al., 2004) that does not need external resources for a fair comparison.", "For passage reading, we compare with both extractive (Min et al., 2019a; Asai et al., 2019; Lee et al., 2019; Min et al., 2019b; Guu et al., 2020; Karpukhin et al., 2020) and generative (Brown et al., 2020; Roberts et al., 2020; Min et al., 2020; Lewis et al., 2020a; Izacard and Grave, 2020) methods when equipping GAR with the corresponding reader.", "Retriever .", "We use Anserini (Yang et al., 2017) for text retrieval of BM25 and GAR with its default parameters.", "We conduct grid search for the QE baseline RM3 (Abdul-Jaleel et al., 2004).", "Generator .", "We use BART-large (Lewis et al., 2019) to generate query contexts in GAR .", "When there are multiple desired targets (such as multiple answers or titles), we concatenate them with [SEP] tokens as the reference and remove the [SEP] tokens in the generation-augmented queries.", "For Trivia, in particular, we use the value field as the generation target of answer and observe better performance.", "We take the checkpoint with the best ROUGE-1 F1 score on the validation set, while observing that the retrieval accuracy of GAR is relatively stable to the checkpoint selection since we do not directly use the generated contexts but treat them as augmentation of queries for retrieval.", "Reader .", "Extractive GAR uses the reader of DPR with largely the same hyperparameters, which is initialized with BERT-base (Devlin et al., 2019) and takes 100 (500) retrieved passages during training (inference).", "Generative GAR concatenates the question and top-10 retrieved passages, and takes at most 1,024 tokens as input.", "Greedy decoding is adopted for all generation models, which appears to perform similarly to (more expensive) beam search.", "We evaluate the effectiveness of GAR in three stages: generation of query contexts (Sec. 6.1), retrieval of relevant passages (Sec. 6.2), and passage reading for OpenQA (Sec. 
6.3).", "Ablation studies are mostly shown on the NQ dataset to understand the drawbacks of GAR since it achieves Question : when did bat out of hell get released?", "better performance on Trivia.", "Automatic Evaluation .", "To evaluate the quality of the generated query contexts, we first measure their lexical overlap with the ground-truth query contexts.", "As suggested by the nontrivial ROUGE scores in Table 3, GAR does learn to generate meaningful query contexts that could help the retrieval stage.", "We next measure the lexical overlap between the query and the ground-truth passage.", "The ROUGE-1/2/L F1 scores between the original query and ground-truth passage are 6.00/2.36/5.01, and those for the generation-augmented query are 7.05/2.84/5.62 (answer), 13.21/6.99/10.27 (sen-tence), 7.13/2.85/5.76 (title) on NQ, respectively.", "Such results further demonstrate that the generated query contexts significantly increase the word overlap between the queries and the positive passages, and thus are likely to improve retrieval results.", "6 Context ROUGE-1 ROUGE-2 ROUGE-L Answer 33.51 20.54 33.30 Sentence 37.14 24.71 33.91 Title 43.20 32.11 39.67 Table 3: ROUGE F1 scores of the generated query contexts on the validation set of the NQ dataset.", "Case Studies .", "In Table 2, we show several examples of the generated query contexts and their ground-truth references.", "In the first example, the correct album release date appears in both the generated answer and the generated sentence, and the generated title is the same as the Wikipedia page title of the album.", "In the last two examples, the generated answers are wrong but fortunately, the generated sentences contain the correct answer and (or) other relevant information and the generated titles are highly related to the question as well, which shows that different query contexts are complementary to each other and the noise during query context generation is thus reduced.", "Comparison w.", "the state-of-the-art .", "We next evaluate the effectiveness of GAR for retrieval.", "In Table 4, we show the top-k retrieval accuracy of BM25, BM25 with query expansion (+RM3) (Abdul-Jaleel et al., 2004), DPR (Karpukhin et al., 2020), GAR , and GAR +DPR.", "On the NQ dataset, while BM25 clearly under-performs DPR regardless of the number of retrieved passages, the gap between GAR and DPR is significantly smaller and negligible when k 100 .", "When k 500 , GAR is slightly better than DPR despite that it simply uses BM25 for retrieval.", "In contrast, the classic QE method RM3, while showing Method NQ Trivia Top-5 Top-20 Top-100 Top-500 Top-1000 Top-5 Top-20 Top-100 Top-500 Top-1000 BM25 (ours) 43.6 62.9 78.1 85.5 87.8 67.7 77.3 83.9 87.9 88.9 BM25 +RM3 44.6 64.2 79.6 86.8 88.9 67.0 77.1 83.8 87.7 88.9 DPR 68.3 80.1 86.1 90.3 91.2 72.7 80.2 84.8 -GAR 60.9 74.4 85.3 90.3 91.7 73.1 80.4 85.7 88.9 89.7 GAR +DPR 70.7 81.6 88.9 92.0 93.2 76.0 82.1 86.6 -Table 4: Top-k retrieval accuracy on the test sets .", "marginal improvement over the vanilla BM25, does not achieve comparable performance with GAR or DPR.", "By fusing the results of GAR and DPR in the same way as described in Sec. 
3.3, we further obtain consistently higher performance than both methods, with top-100 accuracy 88.9% and top-1000 accuracy 93.2%.", "On the Trivia dataset, the results are even more encouraging GAR achieves consistently better retrieval accuracy than DPR when k 5 .", "On the other hand, the difference between BM25 and BM25 +RM3 is negligible, which suggests that naively considering top-ranked passages as relevant ( i.e. , pseudo relevance feedback) for QE does not always work for OpenQA.", "Results on more cutoffs of k can be found in App.", "A. Effectiveness of diverse query contexts .", "In Fig. 1, we show the performance of GAR when different query contexts are used to augment the queries.", "Although the individual performance when using each query context is somewhat similar, fusing their retrieved passages consistently leads to better performance, confirming that different generation-augmented queries are complementary to each other (recall examples in Table 2).", "Performance breakdown by question type .", "In Table 5, we show the top-100 accuracy of the compared retrieval methods per question type on the NQ test set.", "Again, GAR outperforms BM25 on all types of questions significantly and GAR +DPR achieves the best performance across the board, which further verifies the effectiveness of GAR .", "Comparison w.", "the state-of-the-art .", "We show the comparison of end-to-end QA performance of extractive and generative methods in Table 6.", "Extractive GAR achieves state-of-the-art performance among extractive methods on both NQ and Trivia datasets, despite that it is more lightweight and computationally efficient.", "Generative GAR outper-1 5 10 20 50 100 200 300 500 1000 k: # of retrieved passages 30 40 50 60 70 80 90 T op k A cc u r a cy ( % ) Answer+Sentence+Title Answer+Sentence Answer+Title Answer Title Sentence Figure 1: Top-k retrieval accuracy on the test set of NQ when fusing retrieval results of different generation-augmented queries.", "forms most of the generative methods on Trivia but does not perform as well on NQ, which is somewhat expected and consistent with the performance at the retrieval stage, as the generative reader only takes a few passages as input and GAR does not outperform dense retrieval methods on NQ when k is very small.", "However, combining GAR with DPR achieves significantly better performance than both methods or baselines that use DPR as input such as SpanSeqGen (Min et al., 2020) and RAG (Lewis et al., 2020a).", "Also, GAR outperforms BM25 significantly under both extractive and generative se-Method NQ Trivia E x t r ac ti v e Hard EM (Min et al., 2019a) 28.1 50.9 Path Retriever (Asai et al., 2019) 32.6 -ORQA (Lee et al., 2019) 33.3 45.0 Graph Retriever (Min et al., 2019b) 34.5 56.0 REALM (Guu et al., 2020) 40.4 -DPR (Karpukhin et al., 2020) 41.5 57.9 BM25 (ours) 37.7 60.1 GAR 41.8 62.7 74.8 GAR +DPR 43.8 --G e n e r a ti v e GPT-3 (Brown et al., 2020) 29.9 -71.2 T5 (Roberts et al., 2020) 36.6 60.5 SpanSeqGen (Min et al., 2020) 42.2 -RAG (Lewis et al., 2020a) 44.5 56.1 68.0 FID (Izacard and Grave, 2020) 51.4 67.6 80.1 BM25 (ours) 35.3 58.6 GAR 38.1 62.2 GAR +DPR 45.3 -Table 6: End-to-end comparison with the state-of-the-art methods in EM .", "tups, which again shows the effectiveness of the generated query contexts, even if they are heuristically discovered without any external supervision.", "The best performing generative method FID (Izacard and Grave, 2020) is not directly comparable as it takes more (100) passages as input.", "As an 
indirect comparison, GAR performs better than FID when FID encodes 10 passages (cf. Fig. 2 in Izacard and Grave (2020)).", "Moreover, since FID relies on the retrieval results of DPR as well, we believe that it is a low-hanging fruit to replace its input with GAR or GAR+DPR and further boost the performance.", "This claim is later verified by the best systems in the NeurIPS 2020 EfficientQA competition (Min et al., 2021).", "We also observe that, perhaps surprisingly, extractive BM25 performs reasonably well, especially on the Trivia dataset, outperforming many recent state-of-the-art methods.", "We find that taking 500 passages during reader inference instead of 100 as in Karpukhin et al. (2020) improves the performance of BM25 but not DPR.", "Generative BM25 also performs competitively in our experiments.", "Model Generalizability .", "Recent studies (Lewis et al., 2020b) show that there are significant question and answer overlaps between the training and test sets of popular OpenQA datasets.", "Specifically, 60% to 70% of test-time answers also appear in the training set, and roughly 30% of test-set questions have a near-duplicate paraphrase in the training set.", "Such observations suggest that many questions might have been answered by simple question or answer memorization.", "To further examine model generalizability, we study the per-category performance of different methods using the annotations in Lewis et al. (2020b).", "As listed in Table 7, for the No Overlap category, GAR+DPR (E) outperforms DPR on the extractive setup and GAR+DPR (G) outperforms RAG on the generative setup, which indicates that better end-to-end model generalizability can be achieved by adding GAR for retrieval.", "GAR+DPR also achieves the best EM under the Answer Overlap Only category.", "In addition, we observe that a closed-book BART model that only takes the question as input performs much worse than additionally taking top-retrieved passages, i.e., GAR+DPR (G), especially on the questions that require generalizability.", "Notably, all methods perform significantly better on the Question Overlap category, which suggests that the high Total EM is mostly contributed by question memorization.", "That said, GAR+DPR appears to be less dependent on question memorization given its lower EM for this category.", "Efficiency of GAR .", "GAR is efficient and scalable since it uses sparse representations for retrieval and does not involve a time-consuming training process such as RL (Nogueira and Cho, 2017; Liu et al., 2019).", "The only overhead of GAR is on the generation of query contexts and the retrieval with generation-augmented (thus longer) queries, whose computational complexity is significantly lower than that of other methods with comparable retrieval accuracy.", "We use Nvidia V100 GPUs and Intel Xeon Platinum 8168 CPUs in our experiments.", "The same ablation study is also conducted on the retrieval stage and similar results are observed; more detailed discussions can be found in App. A.", "Table 8 compares the training, indexing, and retrieval time of GAR and DPR.", "As listed in Table 8, the training time of GAR is 3 to 6 hours on 1 GPU depending on the generation target.", "As a comparison, REALM (Guu et al., 2020) uses 64 TPUs to train for 200k steps during pre-training alone, and DPR (Karpukhin et al., 2020) takes about 24 hours to train with 8 GPUs.", "To build the indices of Wikipedia passages, GAR only takes around 30 min with 35 CPUs, while DPR takes 8.8 hours on 8 GPUs to generate dense representations and another 8.5 hours to build the FAISS index (Johnson et al., 2017).", "For retrieval, GAR takes about 1 min to generate one query context with 1 GPU, 1 min to retrieve 1,000 passages for the NQ test set with answer/title-augmented queries, and 2 min with sentence-augmented queries using 35 CPUs.", "In contrast, DPR takes about 30 min on 1 GPU.", "In this work, we propose Generation-Augmented Retrieval and demonstrate that the relevant contexts generated by PLMs without external supervision can significantly enrich query semantics and improve retrieval accuracy.", "Remarkably, GAR with sparse representations performs similarly to or better than state-of-the-art methods based on the dense representations of the original queries.", "GAR can also be easily combined with dense representations to produce even better results.", "Furthermore, GAR achieves state-of-the-art end-to-end performance on extractive OpenQA and competitive performance under the generative setup.", "Potential improvements .", "There is still much space to explore and improve for GAR in future work.", "For query context generation, one can explore multi-task learning to further reduce computational cost and examine whether different contexts can mutually enhance each other when generated by the same generator.", "One may also sample multiple contexts instead of greedy decoding to enrich a query.", "For retrieval, one can adopt more advanced fusion techniques based on both the ranking and score of the passages.", "As the generator and retriever are largely independent now, it is also interesting to study how to jointly or iteratively optimize generation and retrieval such that the generator is aware of the retriever and generates query contexts more beneficial for the retrieval stage.", "Last but not least, it is very likely that better results can be obtained by more extensive hyper-parameter tuning.", "Applicability to other tasks .", "Beyond OpenQA, GAR also has great potential for other tasks that involve text matching such as conversation utterance selection (Lowe et al., 2015; Dinan et al., 2020) or information retrieval (Nguyen et al., 2016; Craswell et al., 2020).", "The default generation target is always available for supervised tasks.", "For example, for conversation utterance selection one can use the reference utterance as the default target and then match the concatenation of the conversation history and the generated utterance with the provided utterance candidates.", "For article search, the default target could be (part of) the ground-truth article itself.", "Other generation targets are more task-specific and can be designed as long as they can be fetched from the latent knowledge inside PLMs and are helpful for further text retrieval (matching).", "Note that by augmenting (expanding) the queries with heuristically discovered relevant contexts extracted from PLMs instead of reformulating them, GAR bypasses the need for external supervision to form the original-reformulated query pairs.", "We thank Vladimir Karpukhin, Sewon Min, Gautier 
Izacard, Wenda Qiu, Revanth Reddy, and Hao Cheng for helpful discussions.", "We thank the anonymous reviewers for valuable comments." ]
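The retrieval procedure described in the sentences above (generate answer/sentence/title contexts, append them to the question, retrieve once per augmented query, and fuse the per-context rankings by taking an equal number of top passages from each source) can be illustrated with a minimal, self-contained sketch. The seq2seq generator and the BM25/Anserini scorer are replaced by toy stand-ins here, and the names generate_context, bm25_scores, and gar_retrieve are illustrative assumptions rather than functions from the released GAR code.

```python
# Sketch of GAR-style generation-augmented retrieval with equal-share fusion.
from collections import OrderedDict

def generate_context(question: str, target: str) -> str:
    """Stand-in for the seq2seq generator (BART in the paper)."""
    # A real system would decode from a fine-tuned model; here we just echo the
    # target type so the example stays self-contained and runnable.
    return f"<generated {target} for: {question}>"

def bm25_scores(query: str, corpus: list[str]) -> list[float]:
    """Toy lexical scorer standing in for BM25: shared-token counts."""
    q_tokens = set(query.lower().split())
    return [float(len(q_tokens & set(doc.lower().split()))) for doc in corpus]

def retrieve(query: str, corpus: list[str], k: int) -> list[int]:
    scores = bm25_scores(query, corpus)
    return sorted(range(len(corpus)), key=lambda i: scores[i], reverse=True)[:k]

def gar_retrieve(question: str, corpus: list[str], k: int = 10) -> list[int]:
    # One augmented query per generation target (answer, sentence, title).
    targets = ["answer", "sentence", "title"]
    runs = []
    for t in targets:
        augmented = question + " " + generate_context(question, t)
        runs.append(retrieve(augmented, corpus, k))
    # Fusion: take an equal share of the top passages from each run; duplicates
    # are skipped, so the fused list may be slightly shorter than k.
    fused = OrderedDict()
    share = k // len(runs)
    for run in runs:
        taken = 0
        for doc_id in run:
            if taken >= share:
                break
            if doc_id not in fused:
                fused[doc_id] = True
                taken += 1
    return list(fused.keys())

if __name__ == "__main__":
    corpus = ["Bat Out of Hell was released in 1977.",
              "Paradise by the Dashboard Light is a song by Meat Loaf.",
              "The album title track runs almost ten minutes."]
    print(gar_retrieve("when did bat out of hell get released?", corpus, k=3))
```

A real pipeline would swap in a fine-tuned generator and a proper BM25 index; the equal-share fusion logic is the part that carries over from the description above.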
[ "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "objective", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "Text generation has made significant advances in the last few years.", "Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments.", "We propose BLEURT , a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples.", "A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize.", "BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset.", "In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution.", "In the last few years, research in natural text generation (NLG) has made significant progress, driven largely by the neural encoder-decoder paradigm (Sutskever et al., 2014; Bahdanau et al., 2015) which can tackle a wide array of tasks including translation (Koehn, 2009), summarization ( Mani, 1999; Chopra et al., 2016), structured-data-to-text generation (McKeown, 1992; Kukich, 1983; Wiseman et al., 2017) dialog (Smith and Hipp, 1994; Vinyals and Le, 2015) and image captioning (Fang et al., 2015).", "However, progress is increasingly impeded by the shortcomings of existing metrics (Wiseman et al., 2017; Ma et al., 2019; Tian et al., 2019).", "Human evaluation is often the best indicator of the quality of a system.", "However, designing crowd sourcing experiments is an expensive and high-latency process, which does not easily fit in a daily model development pipeline.", "Therefore, NLG researchers commonly use automatic evaluation metrics , which provide an acceptable proxy for quality and are very cheap to compute.", "This paper investigates sentence-level, reference-based metrics, which describe the extent to which a candidate sentence is similar to a reference one.", "The exact definition of similarity may range from string overlap to logical entailment.", "The first generation of metrics relied on handcrafted rules that measure the surface similarity between the sentences.", "To illustrate, BLEU (Pa-pineni et al., 2002) and ROUGE (Lin, 2004), two popular metrics, rely on N-gram overlap.", "Because those metrics are only sensitive to lexical variation, they cannot appropriately reward semantic or syntactic variations of a given reference.", "Thus, they have been repeatedly shown to correlate poorly with human judgment, in particular when all the systems to compare have a similar level of accuracy (Liu et al., 2016; Novikova et al., 2017; Chaganty et al., 2018).", "Increasingly, NLG researchers have addressed those problems by injecting learned components in their metrics.", "To illustrate, consider the WMT Metrics Shared Task, an annual benchmark in which translation metrics are compared on their ability to imitate human assessments.", "The last two years of the competition were largely dominated by neural net-based approaches, RUSE, YiSi and ESIM (Ma et al., 2018, 2019).", "Current approaches largely fall into two categories.", "Fully learned metrics , such as BEER, RUSE, and ESIM are trained end-to-end, and they typically rely on handcrafted features and/or learned embeddings.", "Conversely, hybrid metrics , such as YiSi and BERTscore combine trained elements, e.g., contextual embeddings, with handwritten logic, e.g., as token alignment rules.", "The first category typically offers great expressivity: if a training set of 
human ratings data is available, the metrics may take full advantage of it and fit the ratings distribution tightly.", "Furthermore, learned metrics can be tuned to measure task-specific properties, such as fluency, faithfulness, grammar, or style.", "On the other hand, hybrid metrics offer robustness.", "They may provide better results when there is little to no training data, and they do not rely on the assumption that training and test data are identically distributed.", "And indeed, the IID assumption is particularly problematic in NLG evaluation because of domain drifts, which have been the main target of the metrics literature, but also because of quality drifts: NLG systems tend to get better over time, and therefore a model trained on ratings data from 2015 may fail to distinguish top-performing systems in 2019, especially for newer research tasks.", "An ideal learned metric would be able to both take full advantage of available ratings data for training, and be robust to distribution drifts, i.e., it should be able to extrapolate.", "Our insight is that it is possible to combine expressivity and robustness by pre-training a fully learned metric on large amounts of synthetic data, before fine-tuning it on human ratings.", "To this end, we introduce BLEURT, a text generation metric based on BERT (Devlin et al., 2019).", "A key ingredient of BLEURT is a novel pre-training scheme, which uses random perturbations of Wikipedia sentences augmented with a diverse set of lexical and semantic-level supervision signals.", "To demonstrate our approach, we train BLEURT for English and evaluate it under different generalization regimes.", "We first verify that it provides state-of-the-art results on all recent years of the WMT Metrics Shared Task (2017 to 2019, to-English language pairs).", "We then stress-test its ability to cope with quality drifts with a synthetic benchmark based on WMT 2017.", "Finally, we show that it can easily adapt to a different domain with three tasks from a data-to-text dataset, WebNLG 2017 (Gardent et al., 2017).", "Ablations show that our synthetic pretraining scheme increases performance in the IID setting, and is critical to ensure robustness when the training data is scarce, skewed, or out-of-domain.", "Define x = (x_1, ..., x_r) to be the reference sentence of length r, where each x_i is a token, and let x̃ = (x̃_1, ..., x̃_p) be a prediction sentence of length p.", "Let {(x_i, x̃_i, y_i)}_{n=1}^{N} be a training dataset of size N, where y_i ∈ R is the human rating that indicates how good x̃_i is with respect to x_i.", "Given the training data, our goal is to learn a function f : (x, x̃) → y that predicts the human rating.", "Given the small amounts of rating data available, it is natural to leverage unsupervised representations for this task.", "In our model, we use BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019), which is an unsupervised technique that learns contextualized representations of sequences of text.", "Given x and x̃, BERT is a Transformer (Vaswani et al., 2017) that returns a sequence of contextualized vectors: v_[CLS], v_{x_1}, ..., v_{x_r}, v_{x̃_1}, ..., v_{x̃_p} = BERT(x, x̃), where v_[CLS] is the representation for the special [CLS] token.", "As described by Devlin et al. 
(2019), we add a linear layer on top of the [CLS] vector to predict the rating: y = f ( x , x ) = W v [CLS] + b where W and b are the weight matrix and bias vector respectively.", "Both the above linear layer as well as the BERT parameters are trained (i.e. fine-tuned) on the supervised data which typically numbers in a few thousand examples.", "We use the regression loss supervised = 1 NP Nn =1 k y i y k 2 .", "Although this approach is quite straightforward, we will show in Section 5 that it gives state-of-the-art results on WMT Metrics Shared Task 17-19, which makes it a high-performing evaluation metric.", "However, fine-tuning BERT requires a sizable amount of IID data, which is less than ideal for a metric that should generalize to a variety of tasks and model drift.", "The key aspect of our approach is a pre-training technique that we use to warm up BERT before fine-tuning on rating data.", "3 We generate a large 3 To clarify, our pre-training scheme is an addition, not a replacement to BERT's initial training (Devlin et al., 2019) and happens after it.", "number of of synthetic reference-candidate pairs ( z , z ) , and we train BERT on several lexicaland semantic-level supervision signals with a multitask loss.", "As our experiments will show, BLEURT generalizes much better after this phase, especially with incomplete training data.", "Any pre-training approach requires a dataset and a set of pre-training tasks.", "Ideally, the setup should resemble the final NLG evaluation task, i.e., the sentence pairs should be distributed similarly and the pre-training signals should correlate with human ratings.", "Unfortunately, we cannot have access to the NLG models that we will evaluate in the future.", "Therefore, we optimized our scheme for generality, with three requirements.", "(1) The set of reference sentences should be large and diverse, so that BLEURT can cope with a wide range of NLG domains and tasks.", "(2) The sentence pairs should contain a wide variety of lexical, syntactic, and semantic dissimilarities.", "The aim here is to anticipate all variations that an NLG system may produce, e.g., phrase substitution, paraphrases, noise, or omissions.", "(3) The pre-training objectives should effectively capture those phenomena, so that BLEURT can learn to identify them.", "The following sections present our approach.", "One way to expose BLEURT to a wide variety of sentence differences is to use existing sentence pairs datasets (Bowman et al., 2015; Williams et al., 2018; Wang et al., 2019).", "These sets are a rich source of related sentences, but they may fail to capture the errors and alterations that NLG systems produce (e.g., omissions, repetitions, nonsensical substitutions).", "We opted for an automatic approach instead, that can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs ( z , z ) by randomly perturbing 1.8 million segments z from Wikipedia.", "We use three techniques: mask-filling with BERT, backtranslation, and randomly dropping out words.", "We obtain about 6.5 million perturbations z .", "Let us describe those techniques.", "Mask-filling with BERT: BERT's initial training task is to fill gaps (i.e., masked tokens) in to-kenized sentences.", "We leverage this functionality by inserting masks at random positions in the Wikipedia sentences, and fill them with the language model.", "Thus, we introduce lexical alterations while maintaining the fluency of the sentence.", "We use two masking strategieswe either introduce the masks at random 
positions in the sentences, or we create contiguous sequences of masked tokens.", "More details are provided in the Appendix.", "Backtranslation: We generate paraphrases and perturbations with backtranslation, that is, round trips from English to another language and then back to English with a translation model (Bannard and Callison-Burch, 2005; Ganitkevitch et al., 2013; Sennrich et al., 2016).", "Our primary aim is to create variants of the reference sentence that preserves semantics.", "Additionally, we use the mispre-dictions of the backtranslation models as a source of realistic alterations.", "Dropping words: We found it useful in our experiments to randomly drop words from the synthetic examples above to create other examples.", "This method prepares BLEURT for pathological behaviors or NLG systems, e.g., void predictions, or sentence truncation.", "The next step is to augment each sentence pair ( z , z ) with a set of pre-training signals { k } , where k is the target vector of pre-training task k .", "Good pre-training signals should capture a wide variety of lexical and semantic differences.", "They should also be cheap to obtain, so that the approach can scale to large amounts of synthetic data.", "The following section presents our 9 pretraining tasks, summarized in Table 1.", "Additional implementation details are in the Appendix.", "Automatic Metrics: We create three signals BLEU , ROUGE , and BERTscore with sentence BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and BERTscore (Zhang et al., 2020) respectively (we use precision, recall and F-score for the latter two).", "Backtranslation Likelihood: The idea behind this signal is to leverage existing translation models to measure semantic equivalence.", "Given a pair ( z , z ) , this training signal measures the probability that z is a backtranslation of z , P ( z | z ) , normalized by the length of z .", "Let P en fr ( z fr | z ) be a translation model that assigns probabilities to French sentences z fr conditioned on English sentences z and let P fr en ( z | z fr ) be a translation model that assigns probabilities to English Task Type Pre-training Signals Loss Type BLEU BLEU Regression ROUGE ROUGE = ( ROUGE-P , ROUGE-R , ROUGE-F ) Regression BERTscore BERTscore = ( BERTscore-P , BERTscore-R , BERTscore-F ) Regression Backtrans.", "sentences given french sentences.", "If | z | is the number of tokens in z , we define our score as en-fr , z | z = log P ( z | z ) | z | , with: P ( z | z ) = X z fr P fr en ( z | z fr ) P en fr ( z fr | z ) Because computing the summation over all possible French sentences is intractable, we approximate the sum using z fr = arg max P en fr ( z fr | z ) and we assume that P en fr ( z fr | z ) 1 : P ( z | z ) P fr en ( z | z fr ) We can trivially reverse the procedure to compute P ( z | z ) , thus we create 4 pre-training signals en-fr , z | z , en-fr , z | z , en-de , z | z , en-de , z | z with two pairs of languages ( en de and en fr ) in both directions.", "Textual Entailment: The signal entail expresses whether z entails or contradicts z using a clas-sifier.", "We report the probability of three labels: Entail , Contradict , and Neutral , using BERT fine-tuned on an entailment dataset, MNLI (Devlin et al., 2019; Williams et al., 2018).", "Backtranslation flag: The signal backtran flag is a Boolean that indicates whether the perturbation was generated with backtranslation or with mask-filling.", "For each pre-training task, our model uses either a regression or a classification loss.", "We 
then aggregate the task-level losses with a weighted sum.", "Let k describe the target vector for each task, e.g., the probabilities for the classes Entail , Contradict , Neutral , or the precision, recall, and F-score for ROUGE.", "If k is a regression task, then the loss used is the 2 loss i.e. k = k k k k 22 / | k | where | k | is the dimension of k and k is computed by using a task-specific linear layer on top of the [CLS] embedding: k = W k v [CLS] + b k .", "If k is a classification task, we use a separate linear layer to predict a logit for each class c : kc = W kc v [CLS] + b kc , and we use the multiclass cross-entropy loss.", "We define our aggregate pre-training loss function as follows: pre-training = 1 MMX m =1 KX k =1 k k ( mk , mk ) (1) where mk is the target vector for example m , M is number of synthetic examples, and k are hy-perparameter weights obtained with grid search (more details in the Appendix).", "In this section, we report our experimental results for two tasks, translation and data-to-text.", "First, we benchmark BLEURT against existing text generation metrics on the last 3 years of the WMT Metrics Shared Task (Bojar et al., 2017).", "We then evaluate its robustness to quality drifts with a series of synthetic datasets based on WMT17.", "We test BLEURT 's ability to adapt to different tasks with the WebNLG 2017 Challenge Dataset (Gar-dent et al., 2017).", "Finally, we measure the contribution of each pre-training task with ablation experiments.", "Our Models: Unless specified otherwise, all BLEURT models are trained in three steps: regular BERT pre-training (Devlin et al., 2019), pre-training on synthetic data (as explained in Section 4), and fine-tuning on task-specific ratings (translation and/or data-to-text).", "We experiment with two versions of BLEURT , BLEURT and BLEURTbase , respectively based on BERT-Large (24 layers, 1024 hidden units, 16 heads) and BERT-Base (12 layers, 768 hidden units, 12 heads) (Devlin et al., 2019), both uncased.", "We use batch size 32, learning rate 1e-5, and 800,000 steps for pre-training and 40,000 steps for fine-tuning.", "We provide the full detail of our training setup in the Appendix.", "Datasets and Metrics: We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs.", "For each year, we used the official WMT test set, which include several thousand pairs of sentences with human ratings from the news domain.", "The training sets contain 5,360, 9,492, and 147,691 records for each year.", "The test sets for years 2018 and 2019 are noisier, as reported by the organizers and shown by the overall lower correlations.", "We evaluate the agreement between the automatic metrics and the human ratings.", "For each year, we report two metrics: Kendall's Tau (for consistency across experiments), and the official WMT metric for that year (for completeness).", "The official WMT metric is either Pearson's correlation or a robust variant of Kendall's Tau called DARR, described in the Appendix.", "All the numbers come from our own implementation of the benchmark.", "4 Our results are globally consistent with the official results but we report small differences in 2018 and 2019, marked in the tables.", "Models: We experiment with four versions of BLEURT : BLEURT , BLEURTbase , BLEURT -pre and BLEURTbase -pre .", "The first two models are based on BERT-large and BERT-base.", "In the latter two versions, we skip the pre-training phase and fine-tune directly on the WMT ratings.", "For each year of the WMT shared task, 
we use the test set from the previous years for training and validation.", "We describe our setup in further detail in the Appendix.", "We compare BLEURT to participant data from the shared task and automatic metrics that we ran ourselves.", "In the former case, we use the the best-performing contestants for each year, that is, chrF++ , BEER , Meteor++ , RUSE , Yisi1 , ESIM and Yisi1-SRL (Mathur et al., 2019).", "All the contestants use the same WMT training data, in addition to existing sentence or token embeddings.", "In the latter case, we use Moses sentenceBLEU , BERTscore (Zhang et al., 2020), and MoverScore (Zhao et al., 2019).", "For BERTscore , we use BERT-large uncased for fairness, and roBERTa (the recommended version) for completeness (Liu et al., 2019).", "We run MoverScore on WMT 2017 using the scripts published by the authors.", "Results: Tables 2, 3, 4 show the results.", "For years 2017 and 2018, a BLEURT -based metric model de-en fi-en gu-en kk-en lt-en ru-en zh-en avg / DA / DA / DA / DA / DA / DA / DA / DA sentBLEU 19.4 / 5.4 20.6 / 23.3 17.3 / 18.9 30.0 / 37.6 23.8 / 26.2 19.4 / 12.4 28.7 / 32.2 22.7 / 22.3 BERTscore w/ BERT 26.2 / 17.3 27.6 / 34.7 25.8 / 29.3 36.9 / 44.0 30.8 / 37.4 25.2 / 20.6 37.5 / 41.4 30.0 / 32.1 BERTscore w/ roBERTa 29.1 / 19.3 29.7 / 35.3 27.7 / 32.4 37.1 / 43.1 32.6 / 38.2 26.3 / 22.7 41.4 / 43.8 32.0 / 33.6 ESIM 28.4 / 16.6 28.9 / 33.7 27.1 / 30.4 38.4 / 43.3 33.2 / 35.9 26.6 / 19.9 38.7 / 39.6 31.6 / 31.3 YiSi1 SRL 19 26.3 / 19.8 27.8 / 34.6 26.6 / 30.6 36.9 / 44.1 30.9 / 38.0 25.3 / 22.0 38.9 / 43.1 30.4 / 33.2 BLEURTbase -pre 30.1 / 15.8 30.4 / 35.4 26.8 / 29.7 37.8 / 41.8 34.2 / 39.0 27.0 / 20.7 40.1 / 39.8 32.3 / 31.7 BLEURTbase 31.0 / 16.6 31.3 / 36.2 27.9 / 30.6 39.5 / 44.6 35.2 / 39.4 28.5 / 21.5 41.7 / 41.6 33.6 / 32.9 BLEURT -pre 31.1 / 16.9 31.3 / 36.5 27.6 / 31.3 38.4 / 42.8 35.0 / 40.0 27.5 / 21.4 41.6 / 41.4 33.2 / 32.9 BLEURT 31.2 / 16.9 31.7 / 36.3 28.3 / 31.9 39.5 / 44.6 35.2 / 40.6 28.3 / 22.3 42.7 / 42.4 33.8 / 33.6 Table 4: Agreement with human ratings on the WMT19 Metrics Shared Task.", "dominates the benchmark for each language pair (Tables 2 and 3).", "BLEURT and BLEURTbase are also competitive for year 2019: they yield the best results for all language pairs on Kendall's Tau, and they come first for 3 out of 7 pairs on DARR.", "As expected, BLEURT dominates BLEURTbase in the majority of cases.", "Pre-training consistently improves the results of BLEURT and BLEURTbase .", "We observe the largest effect on year 2017, where it adds up to 7.4 Kendall Tau points for BLEURTbase ( zh-en ).", "The effect is milder on years 2018 and 2019, up to 2.1 points ( tr-en , 2018).", "We explain the difference by the fact that the training data used for 2017 is smaller than the datasets used for the following years, so pre-training is likelier to help.", "In general pretraining yields higher returns for BERT-base than for BERT-largein fact, BLEURTbase with pretraining is often better than BLEURT without.", "Takeaways: Pre-training delivers consistent improvements, especially for BLEURT-base.", "BLEURT yields state-of-the art performance for all years of the WMT Metrics Shared task.", "a series of tasks for which it is increasingly pressured to extrapolate.", "All the experiments that follow are based on the WMT Metrics Shared Task 2017, because the ratings for this edition are particularly reliable.", "5 Methodology: We create increasingly challenging datasets by sub-sampling the records from the WMT Metrics shared task, keeping low-rated 
translations for training and high-rated translations for test.", "The key parameter is the skew factor , that measures how much the training data is left-skewed and the test data is right-skewed.", "Figure 1 demonstrates the ratings distribution that we used in our experiments.", "The training data shrinks as increases: in the most extreme case ( = 3 . 0 ), we use only 11.9% of the original 5,344 training records.", "We give the full detail of our sampling methodology in the Appendix.", "We use BLEURT with and without pre-training and we compare to Moses sentBLEU and BERTscore .", "We use BERT-large uncased for both BLEURT and BERTscore .", "5 The organizers managed to collect 15 adequacy scores for each translation, and thus the ratings are almost perfectly repeatable (Bojar et al., 2017) Split by System Split by Input f l uen cy g r a mm a r s e m an t i cs 0/9 systems 0 records 2/9 systems 1,174 records 3/9 systems 1,317 records 5/9 systems 2,424 records 0/224 inputs 0 records 38/224 inputs 836 records 66/224 inputs 1,445 records 122/224 inputs 2,689 records 0.0 0.2 0.4 0.0 0.1 0.2 0.3 0.4 0.5 0.0 0.2 0.4 0.6 Num.", "Results: Figure 2 presents BLEURT 's performance as we vary the train and test skew independently.", "Our first observation is that the agreements fall for all metrics as we increase the test skew.", "This effect was already described is the 2019 WMT Metrics report (Ma et al., 2019).", "A common explanation is that the task gets more dif-ficult as the ratings get closerit is easier to discriminate between good and bad systems than to rank good systems.", "Training skew has a disastrous effect on BLEURT without pre-training: it is below BERTscore for = 1 .", "0 , and it falls under sentBLEU for 1 .", "5 .", "Pre-trained BLEURT is much more robust: the only case in which it falls under the baselines is = 3 .", "0 , the most extreme drift, for which incorrect translations are used for train while excellent ones for test.", "Takeaways: Pre-training makes BLEURT sig-nificantly more robust to quality drifts.", "In this section, we evaluate BLEURT 's performance on three tasks from a data-to-text dataset, the WebNLG Challenge 2017 (Shimorina et al., 2019).", "The aim is to assess BLEURT 's capacity to adapt to new tasks with limited training data.", "Dataset and Evaluation Tasks: The WebNLG challenge benchmarks systems that produce natural language description of entities (e.g., buildings, cities, artists) from sets of 1 to 5 RDF triples.", "The organizers released the human assessments for 9 systems over 223 inputs, that is, 4,677 sentence pairs in total (we removed null values).", "Each input comes with 1 to 3 reference descriptions.", "The submissions are evaluated on 3 aspects: semantics, grammar, and fluency.", "We treat each type of rating as a separate modeling task.", "The data has no natural split between train and test, therefore we experiment with several schemes.", "We allocate 0% to about 50% of the data to training, and we split on both the evaluated systems or the RDF inputs in order to test different generalization regimes.", "Systems and Baselines: BLEURT -pre -wmt , is a public BERT-large uncased checkpoint directly trained on the WebNLG ratings.", "BLEURT -wmt was first pre-trained on synthetic data, then fine-tuned on WebNLG data.", "BLEURT was trained in three steps: first on synthetic data, then on WMT data (16-18), and finally on WebNLG data.", "When a record comes with several references, we run BLEURT on each reference and report the highest value (Zhang 
et al., 2020).", "We report four baselines: BLEU , TER , Meteor , and BERTscore .", "The first three were computed by the WebNLG competition organizers.", "We ran the latter one ourselves, using BERT-large uncased for a fair comparison.", "Results: Figure 3 presents the correlation of the metrics with human assessments as we vary the share of data allocated to training.", "The more pre-trained BLEURT is, the quicker it adapts.", "The vanilla BERT approach BLEURT -pre -wmt requires about a third of the WebNLG data to dominate the baselines on the majority of tasks, and it still lags behind on semantics (split by system).", "In 1 task 0%: no pretraining N1 tasks 0%: all pretraining tasks B ERT s c o r e e n t a il b a c k t r a n s m e t h o d _ f l a g BLEUROUGE B ERT s c o r e e n t a il b a c k t r a n s m e t h o d _ f l a g BLEU ROUGE 15 10 5 0 5 Pretraining Task R e l a t i v e I m p r o v", "contrast, BLEURT -wmt is competitive with as little as 836 records, and BLEURT is comparable with BERTscore with zero fine-tuning.", "Takeaways: Thanks to pre-training, BLEURT can quickly adapt to the new tasks.", "BLEURT fine-tuned twice (first on synthetic data, then on WMT data) provides acceptable results on all tasks without training data.", "Figure 4 presents our ablation experiments on WMT 2017, which highlight the relative importance of each pre-training task.", "On the left side, we compare BLEURT pre-trained on a single task to BLEURT without pre-training.", "On the right side, we compare full BLEURT to BLEURT pre-trained on all tasks except one.", "Pre-training on BERTscore, entailment, and the backtranslation scores yield improvements (symmetrically, ablating them degrades BLEURT ).", "Oppositely, BLEU and ROUGE have a negative impact.", "We conclude that pre-training on high quality signals helps BLEURT, but that metrics that correlate less well with human judgment may in fact harm the model.", "6 6 Related Work The WMT shared metrics competition (Bojar et al., 2016; Ma et al., 2018, 2019) has inspired 6 Do those results imply that BLEU and ROUGE should be removed from future versions of BLEURT ?", "Doing so may indeed yield slight improvements on the WMT Metrics 2017 shared task.", "On the other hand the removal may hurt future tasks in which BLEU or ROUGE actually correlate with human assessments.", "We therefore leave the question open.", "the creation of many learned metrics, some of which use regression or deep learning (Stanojevic and Sima'an, 2014; Ma et al., 2017; Shimanaka et al., 2018; Chen et al., 2017; Mathur et al., 2019).", "Other metrics have been introduced, such as the recent MoverScore (Zhao et al., 2019) which combines contextual embeddings and Earth Mover's Distance.", "We provide a head-to-head comparison with the best performing of those in our experiments.", "Other approaches do not attempt to estimate quality directly, but use information extraction or question answering as a proxy (Wise-man et al., 2017; Goodrich et al., 2019; Eyal et al., 2019).", "Those are complementary to our work.", "There has been recent work that uses BERT for evaluation.", "BERTScore (Zhang et al., 2020) proposes replacing the hard n-gram overlap of BLEU with a soft-overlap using BERT embeddings.", "We use it in all our experiments.", "Bertr (Mathur et al., 2019) and YiSi (Mathur et al., 2019) also make use of BERT embeddings to capture similarity.", "Sum-QE (Xenouleas et al., 2019) fine-tunes BERT for quality estimation as we describe in Section 3.", "Our focus is differentwe train 
metrics that are not only state-of-the-art in conventional IID experimental setups, but also robust in the presence of scarce and out-of-distribution training data.", "To our knowledge no existing work has explored pretraining and extrapolation in the context of NLG.", "Previous studies have used noising for refer-enceless evaluation (Dusek et al., 2019).", "Noisy pre-training has also been proposed before for other tasks such as paraphrasing (Wieting et al., 2016; Tomar et al., 2017) but generally not with synthetic data.", "Generating synthetic data via paraphrases and perturbations has been commonly used for generating adversarial examples (Jia and Liang, 2017; Iyyer et al., 2018; Belinkov and Bisk, 2018; Ribeiro et al., 2018), an orthogonal line of research.", "We presented BLEURT , a reference-based text generation metric for English.", "Because the metric is trained end-to-end, BLEURT can model human assessment with superior accuracy.", "Furthermore, pre-training makes the metrics robust particularly robust to both domain and quality drifts.", "Future research directions include multilingual NLG evaluation, and hybrid methods involving both humans and classifiers.", "Thanks to Eunsol Choi, Nicholas FitzGerald, Jacob Devlin, and to the members of the Google AI Language team for the proof-reading, feedback, and suggestions.", "We also thank Madhavan Ki-dambi and Ming-Wei Chang, who implemented blank-filling with BERT." ]
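The quality-drift experiments described above sub-sample the WMT records so that training keeps mostly low-rated translations while the test set keeps high-rated ones, controlled by the skew factor. The exact sampling methodology is only given in that paper's appendix, so the Python sketch below merely illustrates one way such a skewed split could be realized; the exponential acceptance weights and the min-max rescaling of ratings are assumptions made here, not the authors' procedure.

    # Illustrative skew-based sub-sampling: low-rated records are likelier kept for
    # training, high-rated ones for test.  The weighting scheme is an assumption.
    import numpy as np

    def skewed_subsample(ratings, skew=1.0, seed=0):
        rng = np.random.default_rng(seed)
        r = np.asarray(ratings, dtype=float)
        u = (r - r.min()) / (r.max() - r.min() + 1e-8)   # ratings rescaled to [0, 1]
        keep_train = rng.random(len(u)) < np.exp(-skew * u)                         # left-skewed train
        keep_test = ~keep_train & (rng.random(len(u)) < np.exp(-skew * (1.0 - u)))  # right-skewed test
        return np.where(keep_train)[0], np.where(keep_test)[0]

    # Toy usage on 5,344 synthetic ratings: a larger skew keeps fewer, lower-rated training records.
    train_idx, test_idx = skewed_subsample(np.random.rand(5344), skew=3.0)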
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other" ]
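The agreement numbers reported in Tables 2-4 above are segment-level correlations between a metric's scores and the human ratings, expressed as Kendall's Tau (and DARR). A minimal version of the plain Kendall's Tau computation, assuming SciPy is available, looks as follows; the shared task's DARR variant, which scores relative rankings of translation pairs, is not reproduced here.

    # Plain Kendall's Tau between a metric's scores and human ratings.
    from scipy.stats import kendalltau

    def metric_human_agreement(metric_scores, human_ratings):
        tau, _p_value = kendalltau(metric_scores, human_ratings)
        return tau

    # Toy usage with made-up scores for five candidate translations.
    print(metric_human_agreement([0.31, 0.74, 0.52, 0.12, 0.66],
                                 [2.0, 4.5, 3.0, 1.0, 4.0]))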
[ "Readers of academic research papers often read with the goal of answering specific questions.", "Question Answering systems that can answer those questions can make consumption of the content much more efficient.", "However, building such tools requires data that reflect the difficulty of the task arising from complex reasoning about claims made in multiple parts of a paper.", "In contrast, existing information-seeking question answering datasets usually contain questions about generic factoid-type information.", "We therefore present QASPER , a dataset of 5,049 questions over 1,585 Natural Language Processing papers.", "Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text.", "The questions are then answered by a separate set of NLP practitioners who also provide supporting evidence to answers.", "We find that existing models that do well on other QA tasks do not perform well on answering these questions, un-derperforming humans by at least 27 F 1 points when answering them from entire papers, motivating further research in document-grounded, information-seeking QA, which our dataset is designed to facilitate.", "Machines built to assist humans who engage with texts to seek information ought to be designed with an awareness of the information need.", "Abstractly, the human's need should define the lens through which the system views the text in order to find desired information.", "Existing information-seeking machine reading datasets (e.g., Kwiatkowski et al., 2019; Clark et al., 2020) have led to significant progress in reading at scale (e.g., Asai et al., 2020; Guu et al., 2020; Liu et al., 2020).", "However, most of those benchmarks focus on an open domain setting where the questions are not anchored in any particular user context.", "The result is an emphasis Quasar: Datasets for Question Answering by Search and Reading Abstract We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text.", "We present QASPER , 1 an information-seeking question answering (QA) dataset over academic research papers.", "Each question is written as a followup to the title and abstract of a particular paper, and the answer, if present, is identified in the rest of the paper, along with evidence required to arrive at it.", "This setup results in questions requiring more complex document-level reasoning than prior datasets, because", "(i) abstracts provide rich prompts for questions that can be asked as follow-up and", "(ii) academic research papers naturally trigger ques-1 Loosely derived from Question Answering over Scien-tific Research Papers .", "The dataset, baseline code, and other information about the project can be found at https:// allenai.org/project/qasper .", "tions by their target readers that require supporting or refuting claims.", "This evidence may be spread across the paper, including tables and figures, often resulting in complex entailment problems.", "The example in Figure 1 illustrates one such case where we need to retrieve information from paragraphs in three different sections to answer the question.", "QASPER contains 5,049 questions over 1,585 natural language processing (NLP) papers, asked by regular readers of NLP papers, and answered by a separate set of NLP practitioners.", "Each paper has an average of 3.2 questions, up to a maximum of 12 questions 
for a single paper.", "In addition to providing answers when the questions are answerable, the annotators were asked to select text, tables, or figures as evidence required for answering the questions.", "55.5% of the questions require evidence from multiple paragraphs in the paper and 13% require tables or figures.", "To the best of our knowledge, QASPER is the first QA dataset in the academic research domain focusing on entire papers, and not just abstracts.", "To quantify the difficulty of the tasks in QASPER , we apply state-of-the-art document-level Transformer (Vaswani et al., 2017) models to the tasks of selecting evidence and generating answers, and show that the best model performance lags behind humans by 27 F 1 points at answering questions from entire papers, and 32 F 1 points at selecting the paragraphs that provide evidence to answer the questions, indicating that these are both unsolved problems.", "Additionally, we experiment with oracles that answer questions from gold evidence and find that better pretraining and domain-adaptation might be helpful.", "We now describe our process for constructing the dataset.", "We began with a set of open-access NLP papers, recruited NLP practitioners who are regular readers of research papers, and designed two different data collection interfaces: one for collecting follow-up questions given titles and abstracts, and another for obtaining evidence and answers to those questions.", "We filtered S2ORC (Lo et al., 2020), 2 a collection of machine-readable full text for open-access pa-2", "pers, to", "(i) those from arXiv with an associated LaTeX source file, 3 and", "(ii) are in the computational linguistics domain.", "4 We limited our domain to computational linguistics to ensure high quality as we have access to realistic users through our research network; broader domain collection is left to future work and should be enabled by the proof-of-concept of our protocols given in this paper.", "We used the S2ORC parser (which normalizes multi-file LaTeX sources and resolves comments and macros) to convert LaTeX markup to full text while preserving section and paragraph breaks and math equations.", "We supplemented the paper text with extracted images of figures and tables associated with their captions; these were crawled from Semantic Scholar.", "5 The result of this process was a collection of 18K full text papers for annotation.", "To ensure that our questions are realistic, we decoupled the question-writing and question-answering phases.", "For both tasks we recruited graduate students studying NLP and freelancers practicing NLP through professional networks and Upwork 6 .", "All the workers were regular readers of NLP papers, and were paid US$25 per hour on average ($20-$40 based on experience).", "We paid them on a per-hour basis and not a per-question basis to prioritize data quality over quantity.", "A total of 25 workers wrote questions while 51 answered them.", "Questions To ensure that annotators were actually interested in the paper they are reading, we provided them with a lightweight search interface to search papers from the aforementioned collection to focus on their papers of interest.", "The interface supports entering manual queries and examples of the queries annotators used include general (e.g., computer vision) or specific (e.g., question an-swering, information extraction) areas of study, specific tasks (e.g., language identification), entities (e.g., bert, transformers) or concepts (e.g., commonsense, 
interpretability), or domain specifications (e.g., medical, wikipedia).", "Annotators also had the option to not enter any search queries; in this case, they were shown random papers.", "Annotators were displayed only the title and abstracts of relevant papers and asked to 3 LaTeX allows us to avoid quality issues with PDF parsing.", "4 We chose those either tagged with the cs.CL arXiv category or published with an ACL Anthology identifier.", "5 http://semanticscholar.org 6 https://www.upwork.com/ write any number of questions they had about the paper.", "Annotators were instructed to only write questions that are not answerable from the title and abstract but expected to be answered somewhere in the paper.", "Annotators also provided basic information about their expertise in NLP and how familiar they already were with the paper for which they asked questions.", "Most workers (about 70%) had some experience in NLP, with 20% having more than five years of experience.", "A vast majority (94%) of the abstracts were seen by the question-writers for the first time.", "Answers Annotators were randomly assigned papers with all the corresponding questions written for that paper.", "They were shown the paper title, abstract, question, full text, and all associated figures and tables to answer the questions.", "After reading these, annotators were were asked to: Make a binary decision as to whether the question is answerable given the paper.", "If the question is answerable, select the minimal set of evidence snippets that contains the answer to the question.", "This could be (possibly discontiguous) paragraphs from the text and/or figures or tables.", "Annotators were asked to prioritize text over figures and tables, unless the information required was present only in figures or tables.", "When multiple paragraphs could serve as evidence, annotators were asked to first prioritize evidence that adequately answered the question, and then paragraphs that occurred earlier in the text.", "If the question is answerable, also provide a concise answer to the question.", "Annotators were also asked to also indicate whether their concise answer was", "(i) extracted from the evidence,", "(ii) yes or no, or", "(iii) abstractively written.", "Annotators were allowed to skip any questions they did not feel comfortable answering.", "Since the answering task is significantly more complex than the question-writing task, we designed interactive tutorials and qualification exams for the workers for this task using CrowdAQ (Ning et al., 2020).", "Workers who scored well were invited to work on the task.", "If the test performance indicated that the workers did not have sufficient NLP knowledge, or were not used to reading papers we did not let them work on the task.", "In cases where the workers misunderstood the task, but had sufficient background knowledge, we provided additional training before letting them work on the task.", "Table 1 provides representative examples from QASPER categorized by question, answer, and evidence types, which we describe here in greater detail.", "Question types We first analyze whether our annotation setup results in questions that are anchored in the context of the papers.", "To answer this question, we manually 7 categorized a set of 200 questions as being applicable to most papers in the domain (general) vs. 
being applicable only to the paper that the question is written about (specific).", "Table 1 shows that most of the questions (67%) are specific to the papers they are written about.", "This result indicates the advantage of viewing the QASPER task as a question answering problem, instead of an information extraction problem since a fixed schema would not be able to handle the long tail of paper-specific information needs.", "Answer types As shown in Table 1, most of the answers in the dataset are extractive.", "The average length of the extractive answers is 14.4 words (in-cluding all spans), and that of abstractive spans is 15.6 words.", "Evidence types Evidence can include one or more paragraphs from the paper, a figure, or a table, or a combination of these.", "Table 1 shows the distribution of these types.", "Among the answerable questions with text-only evidence, 55.5% of the answers have multi-paragraph evidence (Figure 1 is one example).", "Unanswerable questions do not have any evidence.", "Among the answerable ones, (3.0%) have no evidence when the answer is No , and the evidence is the lack of a mention of something specific.", "The last question in Table 4 is one example of such a case.", "Distribution of evidence paragraphs We perform an analysis to identify the main sections of a paper that contain textual evidence.", "We assign each evidence paragraph to its containing top-level 8 7 Two domain-experts independently judged these, and achieved a Cohen's of 0.94.", "section, and perform some section name normalization.", "We find that among the frequently used section names such as Experiments and Intro-duction, there was not a single section name that contained a majority of evidence spans, indicating that the distribution of evidence over section in the paper was more or less uniform.", "Inter-annotator agreement 44% of the questions in QASPER have multiple annotated answers.", "On average, each question is answered by 1.6 annotators (up to a maximum of 6 annotators for the same question).", "Using these multiple annotations, we compute some measures of agreement between annotators.", "First, we found that there is a high level of agreement (90%) regarding answerability of questions.", "Second, we find that annotators agreed on the type of the evidence (text vs. 
fig-ure) in 84.0% of the cases.", "Papers often provide the same information both in tables and text, and agreement over the evidence types could be a consequence of our clear annotation guidelines regarding selecting evidence.", "Correctness To estimate the correctness of the answer annotations in QASPER , we manually analyzed 100 randomly sampled questions with multiple answer annotations (averaging 2.73 answers per question).", "We found that 207 (75.8%) of the answers were correct.", "98% of the questions had at least one correct answer, and 77% had most of the answers correct.", "We formally define the QASPER tasks as follows: Given a paper, and a question about it, the primary task is to determine if the question is answerable, and output a predicted answer, that is one or more spans in the full-text of the paper, yes , no or other free-form text.", "A system built for this will be evaluated based on the correctness of the predicted answer measured against the reference answers.", "Since QASPER also provides labeled evidence for all questions, the system may also use auxiliary supervision provided by the evidence.", "One such auxiliary task is to predict the evidence required for the question.", "The inputs are the same as that of the primary task, but the outputs are expected to be one or more paragraphs in the full-text, figures, or tables, and they will be evaluated against labeled evidence spans.", "Evaluation metrics As an automatic proxy for the measure of correctness of all types of answers, we use the span-level F 1 measure proposed by Ra-jpurkar et al. (2016).", "We convert answers that are multiple selected spans into single comma-separated strings.", "For questions with multiple reference answers, we compute the max spanF 1 of the predictions over all the references.", "We evaluate the performance of a system over the auxiliary task by computing a F 1 score over the set of paragraphs, figures, and tables chosen by the system against the reference evidence, considering a max when there are multiple references.", "We refer to these metrics as AnswerF 1 and EvidenceF 1 , respectively.", "Data splits We split the dataset into train, validation, and test sets, so that each paper appears in only one of them.", "Our analysis of correctness of annotations presented in Section 3 indicates a high likelihood (98%) of evaluating against a correct reference when evaluation is aggregated over multiple references.", "Hence we ensure that most of the questions in validation and test sets have multiple references (98% in test, and 74% in validation).", "This resulted in 2,593, 1,005, and 1,451 questions in the three sets, respectively.", "Estimating human performance To estimate an upper bound on model performance given our data splits and metrics, we assess the performance of the workers when evaluated against each other using the same metrics on a sample of the test set.", "Since model performance is evaluated by aggregating over multiple references, we consider a subset of the test set containing questions with at least three references (40% of the test set), evaluate each reference against the remaining, and compute an average over all such combinations.", "This procedure estimates the human performance to be 60.9 AnswerF 1 , and 71.6 EvidenceF 1 .", "Note that given the disagreements among the workers estimated in Section 3, this is a lower bound on human performance for two reasons: first, because only two annotations are used to compute the metric, while systems are evaluated 
against all three; and second, because the annotators are NLP practitioners, not expert researchers, and it is likely that an expert would score higher.", "Hence we report these num-bers, along with a breakdown over answer types in Table 2 and Table 3 as human performance lower bounds.", "We base our model on pretrained Transformer (Vaswani et al., 2017) models which currently produce state-of-the-art results on a majority of QA tasks.", "9 Recall that QASPER introduces two main modeling challenges different answer types and long input documents.", "First, QASPER includes a variety of answer types, including extractive, abstractive, yes/no, and unanswerable questions, which means a typical span-selection BERT-based QA model (Devlin et al., 2019) is not sufficient to support all these answer types.", "We address this by converting all answer types into a single task: generating answer text (Raffel et al., 2020; Khashabi et al., 2020).", "10 This is a sequence-to-sequence formulation that requires an encoder-decoder Transformer model where the encoder reads the question and the document and the decoder generates the answer text.", "Second, research papers are much longer than the typical 512 or 1024 token limit of most BERT-like models, so we need a Transformer model that can process long inputs.", "We use the Longformer-Encoder-Decoder (LED; Beltagy et al., 2020), an encoder-decoder Transformer model that can effi-ciently process input sequences thousands of tokens long.", "With LED's support for input sequence length of 16K tokens, we can encode 99% of the paper full texts in the QASPER dataset without truncation.", "Longformer-Encoder-Decoder (LED) LED (Beltagy et al., 2020) is a variant of the original Transformer encoder-decoder model that replaces the Transformer's full self-attention in the encoder with the efficient local+global attention pattern 9 https://paperswithcode.com/task/ question-answering 10 We tried a model that predicts answer type, then based on the type uses a different head to predict the corresponding answer.", "This model performed much worse than the proposed seq2seq formulation.", "of Longformer.", "This allows each token to attend to only its local window and a pre-specified set of global locations of interest, thereby scaling self-attention computation linearly with the input size (as opposed to quadratically with full context self-attention).", "LED has a similar architecture to BART (Lewis et al., 2020) in terms of number of layers and hidden state sizes, with the distinction that it has a larger position embeddings matrix, allowing it to process inputs of up to 16K tokens long (up from 1K tokens in the original BART model).", "In practice, LED's parameters are initialized from a pretrained BART model, and LED copies BART's position embeddings 16 times to fill the entire 16K position embeddings matrix.", "For all experiments we use the LED-base sized model, which uses BART-base weights.", "Input and Output Encoding For the input, we follow the Longformer QA models (Beltagy et al., 2020) and encode the question and context in one concatenated string with global attention over all the question tokens.", "For the output, all answer types are encoded as single strings.", "The string is the text of the abstractive answer, a comma separated concatenation of the extractive spans, Yes, No, or Unanswerable.", "Evidence extraction To support extracting evidence paragraphs, we prepend each paragraph with a </s> token and add a classification head over these tokens on 
LED's encoder side.", "We also add Longformer's global attention over these tokens to facilitate direct information flow across the paragraphs.", "We then train LED using both loss functions (teacher-forced text generation and paragraph classification) in a multi-task training setup.", "For the answer generation, we use a cross-entropy loss function over the vocabulary.", "For the evidence paragraph extraction, we use a cross-entropy loss function with binary 0 or 1 gold labels for evidence/non-evidence paragraph.", "To account for class imbalance, we use loss scaling with weights proportional to the ratio of positive to negative gold paragraphs in the batch, which we found to be crucial for the model to train.", "One benefit of multi-task training of evidence extraction along with answer selection is that tasks can benefit each other (see Section 5.2).", "We evaluate model performance on question answering and evidence selection tasks, and compare", "them to estimated lower bounds on human performance.", "These human performance estimates are calculated by comparing the answers of questions for which we have multiple human annotations.", "For each question, we choose one annotation as if it were a prediction, and evaluate it against the rest of the annotations, and consider as human performance the average over all annotations chosen as predictions.", "We restrict our experiments to the subset of questions in QASPER that can be answered from text in the paper, ignoring those that require figures or tables as evidence (13% of the dataset; see Section 3) to avoid having to deal with multimodal inputs.", "We leave multimodal question answering to future work.", "We train all models using the Adam optimizer (Kingma and Ba, 2014) and a triangular learning rate scheduler (Howard and Ruder, 2018) with 10% warmup.", "To determine number of epochs, peak learning rate, and batch size, we performed manual hyperparameter search on a subset of the training data.", "We searched over {1, 3, 5} epochs with learning rates { 1 e 5 , 3 e 5 , 5 e 5 , 9 e 5 }, and found that smaller batch sizes generally work better than larger ones.", "Our final configuration was 10 epochs, peak learning rate of 5 e 5 , and batch size of 2, which we used for all reported experimental settings.", "When handling full text, we use gradient checkpointing (Chen et al., 2016) to reduce memory consumption.", "We run our experiments on a single RTX 8000 GPU, and each experiment takes 3060 minutes per epoch.", "Question answering Table 2 shows the overall performance of the LED-base model 11 on question answering, as well as the performance breakdown on the different answer types.", "The table also compares LED-base variants when the input is heuristically limited to smaller parts of the paper (i.e., no context, abstract, introduction).", "We generally observe that, by using more context, the performance improves.", "Specifically, as we observe in row 5 encoding the entire context results in significant overall performance improvement ( = +9 . 
5 ) over the best heuristic (introduction).", "This signifies the importance of encoding the entire paper.", "Comparing rows 4 and 5, we observe that using the 11 We trained an LED-large model as well, but it performed much worse than the base model on the QA task.", "Evidence selection Table 3 illustrates the evidence selection performance of the LED-large and LED-base models compared with simpler baselines.", "We observe that LED variants outperform the simple TF-IDF baseline but there still remains a large gap to human performance.", "Varying amounts of training Figure 2 shows the learning curve that measures the validation AnswerF 1 and EvidenceF 1 of the LED-base variants based on training data size.", "The learning curve suggests that performance has not reached a plateau, and future data collection could be useful.", "Answer prediction from gold evidence To better isolate the question answering (as opposed to evidence selection) task performance, we perform oracle experiments where models are given the gold evidence.", "For these experiments, we are able to use larger (T5-large; Raffel et al., 2020) or better task-adapted pretrained models (UnifiedQA-large; Khashabi et al., 2020), which perform significantly better in the oracle setting.", "We did not use them in the non-oracle setting, however, as Longformer versions of these models are not available, and LED's ability to handle the full document without the need for a pipelined retrieval system was more important.", "These experiments show that (1) the human lower bound is in fact a lower bound, as large models exceed it for span answers in this setting; (2) the majority of the large headroom in the non-oracle setting can be closed with better evidence selection; and (3) research into making large pretrained models able to better scale to long documents would be beneficial.", "Error analysis To gain insight into the model's errors, we sample 67 test questions with predicted AnswerF 1 scores below 0.10 from the LED model trained with evidence prediction scaffolding.", "We remove four cases in which the predicted answers are actually correct.", "Examining gold answers of the remaining 63, we find 31 are extractive, 24 are abstractive, 3 are yes, 3 are no, and 2 are unanswerable.", "We observe that LED often predicts shorter spans than the gold answers (9.5 words shorter than gold counterparts, on average).", "Focusing only on the 55 questions with either extractive or abstractive gold answers, we manually categorize error types in Table 5.", "Information-Verifying QA A large body of work on question answering follows the information-verifying paradigm where the writer of the question already knows its answer, and the questions are written solely for evaluating the knowledge or understanding capabilities of machines.", "Some examples include SQuAD (Rajpurkar et al., 2016), TriviaQA (Joshi et al., 2017), NarrativeQA (Kocisk et al., 2018), WikiHop (Welbl et al., 2018), HotpotQA (Yang et al., 2018), CoQA (Reddy et al., 2019), DROP (Dua et al., 2019), QUOREF (Dasigi et al., 2019).", "Most datasets for QA on academic research papers also fall within the information-verifying paradigm as they automatically construct QA examples using extracted entities and relations and structured knowledge resources, like DrugBank.", "Some examples include emrQA (Pampari et al., 2018), BioRead (Pappas et al., 2018), BioMRC (Pappas et al., 2020), MedHop (Welbl et al., 2018).", "While these datasets enabled significant progress in machine comprehension, 
they include biases in questions that may not reflect real-world settings (Kwiatkowski et al., 2019).", "Information-Seeking QA in General Domain Recognizing this challenge, others have followed an information-seeking paradigm where the writer of questions is genuinely interested in finding the answer to the question, or at least does not have access to the answer.", "Examples of such datasets include WikiQA (Yang et al., 2015), NewsQA (Trischler et al., 2017), MsMarco (Campos et al., 2016), QuAC (Choi et al., 2018), Natural Questions (Kwiatkowski et al., 2019), TyDiQA (Clark et al., 2020), and IIRC (Ferguson et al., 2020).", "Unlike QASPER , Natural Questions and TyDiQA 12 12 TyDiQA uses short snippets to prime annotators to write questions of interest, but the annotation process does not re-questions are not grounded in any contexts, and the associated documents are linked to the questions after they are written.", "In contrast, QASPER 's questions are real follow-up questions about a paper that a reader of appropriate domain expertise would have after reading the title and the abstract.", "The priming lets the readers ask detailed questions that are specific to the papers in context, those that require a deeper understanding of the contexts, like those shown in Figure 1 and Table 1.", "QuAC used similar data collection method but with focus on entities, which QASPER does not impose.", "Domain-Specific Information-seeking QA Some work has been done on information-seeking QA on academic research papers.", "PubmedQA (Jin et al., 2019) derives Yes/No/Maybe questions from PubMed paper titles answered from the conclusion sections of the corresponding abstracts.", "BioAsq benchmarks (Balikas et al., 2013; Nentidis et al., 2018; Krallinger et al., 2020) focus on open-domain QA over PubMed abstracts.", "Like QASPER , BioAsq answers can take different forms (e.g., yes/no, extracted span(s)).", "QASPER differs from BioAsq in that questions are grounded in a single paper of interest.", "Furthermore, QASPER uses the paper full text, not just the abstract.", "To the best of our knowledge, QASPER is the first information-seeking QA dataset in a computer science domain, while most prior work using academic research papers has been in biomedicine.", "Furthermore, with over 5K annotated questions, QASPER is also larger than other comparable human-annotated QA datasets PubmedQA and BioAsq contain 1K and 3.2K questions, respectively.", "Finally, QASPER poses a challenging full document-level task while other related datasets are abstract-level.", "Beyond the domain of academic research, realistic QA datasets have also been built in the privacy policy domain (Ravichander et al., 2019; Ahmad et al., 2020).", "These tasks are similar to our evidence selection task.", "We presented QASPER , an information-seeking QA dataset over NLP research papers.", "With natural questions asked as follow-up to titles and abstracts, the task presented by QASPER requires evidence from multiple paragraphs and/or figures and tables within the full text of the papers.", "Our empirical quire workers to write questions grounded in those snippets.", "results show plenty of room for improvement when compared to the estimated human performance, and suggest that QASPER could serve as a test-bed for evaluating document-grounded QA research.", "We present a new dataset that uses papers authored by other researchers.", "To adhere to copyright, we have restricted ourselves to arXiv papers released under a CC-BY-* license, as identified 
via Unpay-wall, which was used in the S2ORC (Lo et al., 2020) dataset construction.", "Due to our choice to use arXiv as the source of papers, QASPER is almost entirely an English-language dataset, and QA systems built on QASPER would not be expected to work well on non-English language research papers.", "We have determined the amount we paid the annotators to be well-above the minimum wage in our local area.", "While we do collect information about annotator background in NLP and familiarity with the papers they are annotating, we have not collected personal identifiable information without their permission except for payment purposes, and do not include any such information in the released dataset." ]
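The evaluation described above scores answers with a SQuAD-style token-level F1 (Answer-F1), joining multi-span answers into one comma-separated string and taking the maximum over multiple reference answers, and scores evidence selection with a set-level F1 over the chosen paragraphs (Evidence-F1). The sketch below follows that description; the text normalization (casing, punctuation, articles) of the official SQuAD script is simplified here to lower-casing and whitespace tokenization.

    from collections import Counter

    def token_f1(prediction, reference):
        # SQuAD-style token-overlap F1 (normalization simplified).
        pred_tokens = prediction.lower().split()
        ref_tokens = reference.lower().split()
        overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(ref_tokens)
        return 2 * precision * recall / (precision + recall)

    def answer_f1(prediction, references):
        # Max over multiple reference answers.
        return max(token_f1(prediction, ref) for ref in references)

    def evidence_f1(predicted_paragraphs, reference_paragraph_sets):
        # Set-level F1 over selected evidence paragraphs, max over reference sets.
        def set_f1(pred, gold):
            pred, gold = set(pred), set(gold)
            if not pred or not gold:
                return float(pred == gold)
            tp = len(pred & gold)
            if tp == 0:
                return 0.0
            p, r = tp / len(pred), tp / len(gold)
            return 2 * p * r / (p + r)
        return max(set_f1(predicted_paragraphs, gold) for gold in reference_paragraph_sets)

    # Toy usage.
    print(answer_f1("multi-head attention", ["multi-head self attention", "attention heads"]))
    print(evidence_f1(["p3", "p7"], [["p3"], ["p3", "p7", "p9"]]))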
[ "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "method", "abstain", "method", "abstain", "other", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain" ]
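The baseline described above concatenates the question with the full paper, prepends a </s> token to every paragraph, and places Longformer-style global attention on the question tokens before generating the answer with a Longformer-Encoder-Decoder. The following sketch shows one way to set this up, assuming the Hugging Face LED implementation and the public allenai/led-base-16384 checkpoint; it is not the authors' training code, and the evidence-classification head with its scaled loss is omitted.

    # Question + full-paper encoding for LED answer generation (illustrative only).
    import torch
    from transformers import LEDTokenizer, LEDForConditionalGeneration

    tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
    model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

    question = "Which pretrained checkpoint is the model initialized from?"
    paragraphs = ["We base our model on LED ...", "We train all models with Adam ..."]

    # Prepend </s> to every paragraph; these tokens can serve as per-paragraph anchors.
    context = "".join(tokenizer.eos_token + " " + p for p in paragraphs)
    inputs = tokenizer(question, context, return_tensors="pt",
                       truncation=True, max_length=16384)

    # Global attention over the question tokens (everything up to the first </s>).
    global_attention_mask = torch.zeros_like(inputs["input_ids"])
    first_sep = (inputs["input_ids"][0] == tokenizer.eos_token_id).nonzero()[0].item()
    global_attention_mask[0, : first_sep + 1] = 1

    answer_ids = model.generate(inputs["input_ids"],
                                attention_mask=inputs["attention_mask"],
                                global_attention_mask=global_attention_mask,
                                max_length=64)
    print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))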
[ "Transformer architecture achieves great success in abundant natural language processing tasks.", "The over-parameterization of the Transformer model has motivated plenty of works to alleviate its overfitting for superior performances.", "With some explorations, we find simple techniques such as dropout, can greatly boost model performance with a careful design.", "Therefore, in this paper, we integrate different dropout techniques into the training of Transformer models.", "Specifically, we propose an approach named UniDrop to unite three different dropout techniques from fine-grain to coarse-grain, i.e., feature dropout, structure dropout, and data dropout.", "Theoretically, we demonstrate that these three dropouts play different roles from regularization perspectives.", "Empirically, we conduct experiments on both neural machine translation and text classification benchmark datasets.", "Extensive results indicate that Transformer with UniDrop can achieve around 1 .", "5 BLEU improvement on IWSLT14 translation tasks, and better accuracy for the classification even using strong pre-trained RoBERTa as backbone.", "In recent years, Transformer (Vaswani et al., 2017) has been the dominant structure in natural language processing (NLP), such as neural machine translation (Vaswani et al., 2017), language modeling (Dai et al., 2019) and text classification (Devlin et al., 2019; Liu et al., 2019).", "To further improve the model performance, there has been much effort in designing better architectures or introducing external knowledge into Transformer models (Wu et al., 2019; Lu et al., 2019; Kitaev et al., 2020; Ahmed et al., 2017; Hashemi et al., 2020), which increases computational costs or requires extra resources.", "a crucial problem for Transformer.", "Regularization methods such as weight decay (Krogh and Hertz, 1992), data augmentation (Sennrich et al., 2016a), dropout (Srivastava et al., 2014), parameter sharing (Dehghani et al., 2018; Xia et al., 2019) are all widely adopted to address overfitting.", "Among these regularization approaches, dropout (Srivas-tava et al., 2014), which randomly drops out some hidden units during training, is the most popular one and various dropout techniques have been proposed for Transformer.", "For example, Fan et al. (2020a) propose LayerDrop , a random structured dropout, to drop certain layers of Transformer during training.", "Zhou et al. 
(2020) alternatively propose DropHead as a structured dropout method for regularizing the multi-head attention mechanism.", "Both of them achieved promising performances.", "One great advantage of dropout is that it is free of additional computational costs and resource requirements.", "Hence we ask one question: can we achieve stronger or even state-of-the-art (SOTA) results only relying on various dropout techniques instead of extra model architecture design or knowledge enhancement?", "To this end, in this paper, we propose UniDrop to integrate three different-level dropout techniques from fine-grain to coarse-grain, feature dropout , structure dropout , and data dropout , into Transformer models.", "Feature dropout is the conventional dropout (Srivastava et al., 2014) that we introduced before, which is widely applied on hidden representations of networks.", "Structure dropout is a coarse-grained control and aims to randomly drop some entire substructures or components from the whole model.", "In this work, we adopt the aforementioned LayerDrop (Fan et al., 2020a) as our structure dropout.", "Different from the previous two dropout methods, data dropout (Iyyer et al., 2015) is performed on the input data level, which serves as a data augmentation method by randomly dropping out some tokens in an input sequence.", "(a) Transformer architecture.", "(b) Structure and overview of feature dropout.", "We first theoretically analyze different regularization roles played by the three dropout techniques, and we show they can improve the generalization ability from different aspects.", "Then, we provide empirical evaluations of the UniDrop approach.", "We conduct experiments on neural machine translation with 8 translation datasets, and text classification task with 8 benchmark datasets.", "On both sequence generation and classification tasks, experimental results show that the three dropouts in UniDrop can jointly improve the performance of Transformer.", "The contributions of this paper can be summarized as follows: We introduce UniDrop , which unites three different dropout techniques into a robust one for Transformer, to jointly improve the performance of Transformer without additional computational cost and prior knowledge.", "We theoretically demonstrate that the three dropouts, i.e., feature dropout, structure dropout, and data dropout play different roles in preventing Transformer from overfitting and improving the robustness of the model.", "Extensive results indicate that Transformer models with UniDrop can achieve strong or even SOTA performances on sequence generation and classification tasks.", "Specifically, around 1 .", "5 BLEU improvement on IWSLT14 translation tasks, and better accuracy for classification even using strong pre-trained model RoBERTa as backbone.", "Feature dropout (FD) and structure dropout (SD) are highly coupled with model architecture.", "Therefore, we briefly recap Transformer and refer the readers to Vaswani et al. 
(2017) for details.", "As shown in Figure 1a, Transformer is stacked by several identical blocks, and each block contains two sub-layers, which are multi-head self-attention layer and position-wise fully connected feed-forward layer.", "Each sub-layer is followed by an AddNorm operation that is a residual connection Add (He et al., 2016) and a layer normalization LN (Ba et al., 2016).", "Multi-head Attention sub-layer consists of multiple parallel attention heads, and each head maps the query Q and a set of key-value pairs K , V to an output through a scale dot-product attention: Attn( Q , K , V ) = softmax( QK (cid:62) d k ) V , (1) where d k is the dimension of query and key, and 1 d k is a scaling factor.", "The outputs of these heads are then concatenated and projected again to result in the final values.", "Position-wise Feed-Forward sub-layer applies two linear transformations with an inner ReLU (Nair and Hinton, 2010) activation: FFN ( x ) = max(0 , xW 1 + b 1 ) W 2 + b 2 , (2) where W and b are parameters.", "In this section, we first introduce the details of the three different levels of dropout techniques we study, feature dropout, structure dropout and data dropout.", "Then we provide the theoretical analysis of these dropout methods on the regularization perspectives.", "Finally, we present our proposed UniDrop approach for training Transformer.", "The feature dropout (FD), as a well-known regularization method, is proposed by Srivastava et al. (2014), which is to randomly suppress neurons of neural networks during training by setting them to 0 with a pre-defined probability p .", "In practice, dropout is applied to the output of each sub-layer by default.", "Besides, Transformer also contains two specific feature dropouts for multi-head attention and activation layer of feedforward network.", "In this work, we also explore their effects on the performance of Transformer.", "FD-1 (attention dropout): according to Equation (1), we can obtain attention weight matrix A = QK (cid:62) towards value sequence V .", "Our FD-1 is applied to the attention weight A .", "FD-2 (activation dropout): FD-2 is employed after the activation function between the two linear transformations of FFN sub-layer.", "In addition to the above FDs for Transformer, we still find the risk of overfitting in pre-experiments.", "Therefore, we further introduce another two feature dropouts into the model architecture: FD-3 (query, key, value dropout): FD-1 is used to improve generalization of multi-head attention.", "However, it is directly applied to the attention weights A , where drop value A ( i, j ) means ignore the relation between token i and token j , thus a larger FD-1 means a larger risk of losing some critical information from sequence positions.", "To alleviate this potential risk, we add dropout to query, key, and value before the calculation of attention.", "FD-4 (output dropout): we also apply dropout to the output features before linear transformation for softmax classification.", "Specifically, when dealing with sequence-to-sequence tasks such as machine translation, we add FD-4 to the output features of the last layer in the Transformer decoder, otherwise the last layer of the Transformer encoder.", "The positions of each feature dropout applied in Transformer 1 are shown in Figure 1b.", "There are three structure dropouts, respectively LayerDrop (Fan et al., 2020a), DropHead (Zhou et al., 2020) and HeadMask (Sun et al., 2020), which are specifically designed for Transformer.", "Some recent 
studies (Voita et al., 2019; Michel et al., 2019) show that the multi-head attention mechanism is dominated by a small portion of attention heads.", "To prevent domination and excessive co-adaptation between different attention heads, Zhou et al. (2020) and Sun et al. (2020) respectively propose the structured DropHead and HeadMask, which drop certain entire heads during training.", "In contrast, LayerDrop (Fan et al., 2020a) is a higher-level and coarser-grained structure dropout.", "It drops some entire layers at training time and directly reduces the Transformer model size.", "In this work, we adopt LayerDrop as the structure dropout to incorporate it into our UniDrop.", "Data dropout aims to randomly remove some words in the sentence with a pre-defined probability.", "It is often used as a data augmentation technique (Wei and Zou, 2019; Xie et al., 2020).", "However, directly applying vanilla data dropout makes it hard to keep the original sequence for training, which leads to the risk of losing high-quality training samples.", "To address this issue, we propose a two-stage data dropout strategy.", "Specifically, given a sequence, with probability $p_k$ (a hyperparameter in $(0, 1)$), we keep the original sequence and do not apply data dropout.", "If data dropout is applied, then for each token, with another probability $p$ (another hyperparameter in $(0, 1)$), we drop the token.", "In this section, we provide theoretical analysis for feature dropout, structure dropout and data dropout, to show their different regularization effects.", "We first re-formulate the three dropout methods.", "For some probability $p$ and layer representation $h \in \mathbb{R}^d$ (i.e., $h$ is the vector of outputs of some layer), we
randomly sample a scaling vector $\xi \in \mathbb{R}^d$ with each independent coordinate distributed as follows: $\xi_i = -1$ with probability $p$, and $\xi_i = \frac{p}{1-p}$ with probability $1-p$. (3) (Footnote 1: we also explored other positions for feature dropout, but their performances are not so good; see Appendix A.3.)", "Here, $i$ indexes a coordinate of $\xi$, $i \in [1, \dots, d]$.", "Then feature dropout can be applied by computing $h_{fd} = (\mathbf{1} + \xi) \odot h$, where $\odot$ denotes the element-wise product and $\mathbf{1} = (1, 1, \dots, 1)^{\top}$.", "Similar to Wei et al. 
To better view each dropout in a model forward pass, we only show a three layers of architecture in Figure 2, and each layer with one specific dropout technique.", "The data dropout is applied in the input layer by dropping out some word embeddings (e.g., embedding of word t i is dropped).", "In the middle layer, the feature dropout randomly drops several neurons in each word representations (e.g., the third neurons of word t i 1 is dropped).", "The last layer is directly dropped out through layer dropout 2 .", "We conduct experiments on both sequence generation and classification tasks, specifically, neural machine translation and text classification, to validate the effectiveness of UniDrop for Transformer.", "In this section, we introduce the detailed settings for the neural machine translation tasks and report the experimental results.", "We adopt the widely acknowledged IWSLT14 datasets 3 with multiple language pairs, including English German (En De), English Romanian (En Ro), English Dutch (En Nl), and English Portuguese-Brazil (En Pt-br), a total number of 8 translation tasks.", "Each dataset contains about 170k 190k translation data pairs.", "The datasets are processed by Moses toolkit 4 and byte-pair-encoding (BPE) (Sennrich et al., 2016b) is applied to obtain subword units.", "The detailed statistics of datasets are shown in Appendix A.2.", "We use the transformer_iwslt_de_en configuration 5 for all Transformer models.", "Specifically, the encoder and decoder both consist of 6 blocks.", "The source and target word embeddings are shared for each language pair.", "The dimensions of embedding and feed-forward sub-layer are respectively set to 512 and 1024 , the number of attention heads is 4 .", "The default dropout (not our four feature dropout) rate is 0 .", "3 and weight decay is 0 .", "0001 .", "All models are optimized with Adam (Kingma and Ba, 2015) and the learning rate schedule is same as in Vaswani et al. 
(2017).", "The weight of label smoothing (Pereyra et al., 2017) is set to 0 .", "1 .", "For the Transformer models with our UniDrop , we set all feature dropout rates to 0 .", "1 .", "The structure dropout LayerDrop is only applied to the decoder with rate 0 .", "1 .", "For the data dropout, the sequence 2 Except the data dropout is only applied in the input layer, feature/structure dropout can be applied in each layer.", "keep rate p k and token dropout rate p are respectively 0 .", "5 and 0 .", "2 .", "The other settings are the same as the configuration of the baseline Transformer.", "To evaluate the model performance, we use beam search (Sutskever et al., 2014) algorithm to generate the translation results.", "The beam width is 5 and the length penalty is 1 .", "0 .", "The evaluation metric is the tokenized BLEU (Papineni et al., 2002) score with multi-bleu.perl script 6 .", "We repeat each experiment three times with different seeds and report the average BLEU.", "Table 1 shows the BLEU results of the Transformer baselines and models with different dropouts.", "Compared with baselines, we can see that the dropouts FD, SD, or DD all bring some improvements 7 .", "This observation verifies the existence of overfitting in the Transformer.", "In contrast, our model Transformer+ UniDrop achieves the most improvements across all translation tasks, which demonstrates the effectiveness of UniDrop for the Transformer architecture.", "To further explore the effects of the three different grained dropouts in UniDrop , we conduct ablation studies and respectively remove the FD, SD, and DD from Transformer+ UniDrop .", "The results in Table 1 show that three ablated models obtain lower BLEU scores compared to the full model.", "This observation validates the necessity of them for UniDrop .", "Among all ablation versions, the Transformer-UniDrop w/o FD obtains the least improvements.", "It is reasonable because FD actually contains four feature dropouts on different positions, which can effectively prevent Transformer from overfitting.", "To show the superiority of UniDrop , we also compare the Transformer+ UniDrop with several existing works on the widely acknowledged benchmark IWSLT14 De En translation.", "These works improve machine translation from different aspects, such as the training algorithm design (Wang et al., 2019b), model architecture design (Lu et al., 2019; Wu et al., 2019) and data augmentation (Gao et al., 2019).", "The detailed results are shown in Table 2. We can see that the Transformer model with our UniDrop outperforms all previous works and achieve state-of-the-art performance, with 36 .", "88 6 https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/generic/multi-bleu.perl 7 The dropout rates of model Transformer+FD, Trans-former+SD, Transformer+DD are tuned with IWSLT14 De En dev set and respectively set to 0 .", "BLEU score.", "Especially, it surpasses the BERT-fused NMT model (Zhu et al., 2020), which incorporates the pre-trained language model BERT, by a non-trivial margin.", "We also show some comparisons on IWSLT14 En De, Ro En, and Nl En translations, the results are shown in Table 3. 
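As a side note, the corpus-level BLEU evaluation described above can also be computed with the sacreBLEU package instead of multi-bleu.perl; the scores are close but not strictly identical because sacreBLEU applies its own tokenization, and the file paths below are placeholders.

```python
import sacrebleu

def corpus_bleu(hyp_path, ref_path):
    # One translation / reference per line (placeholder file names).
    with open(hyp_path, encoding="utf-8") as f:
        hyps = [line.strip() for line in f]
    with open(ref_path, encoding="utf-8") as f:
        refs = [line.strip() for line in f]
    return sacrebleu.corpus_bleu(hyps, [refs]).score

# Average over the three random seeds, as done in the paper.
scores = [corpus_bleu(f"hyp.seed{i}.txt", "ref.txt") for i in range(1, 4)]
print(sum(scores) / len(scores))
```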
According to the above results, UniDrop successfully unites the FD, SD, and DD, and finally improves the performance of Transformer on neural machine translation tasks, without any additional computation costs and resource requirements.", "We also conduct experiments on text classification tasks to further demonstrate the effectiveness of UniDrop for the Transformer models.", "We evaluate different methods on the text classification task based on 8 widely-studied datasets, which can be divided into two groups.", "The first group is from GLUE tasks (Wang et al., 2019a), and they are usually used to evaluate the performance of the large-scale pre-trained language models after fine-tuning.", "The second group is some typical text classification datasets that are widely used in previous works (Voorhees and Tice, 1999; Maas et al., 2011; Zhang et al., 2015).", "The statistics of all datasets are shown in Appendix A.2.", "We employ RoBERTa BASE (Liu et al., 2019) as the strong baseline and fine-tune it on the text classification datasets.", "Different from BERTBASE (Devlin et al., 2019), RoBERTa BASE is pre-trained with dynamic masking, full-sentences without NSP loss and a larger mini-batches.", "It has 12 blocks, and the dimensions of embedding and FFN are 768 and 3072 , the number of attention heads is 12 .", "When fine-tuning, we set the batch size to 32 and the max epoch to 30 .", "Adam is applied to optimize the models with a learning rate of 1e-5 and a warm-up step ratio of 0 .", "1 .", "We employ the polynomial decay strategy to adjust the learning rate.", "The default dropout and weight decay are both set to 0 .", "1 .", "When adding UniDrop to RoBERTa BASE , we empirically set feature dropout rate and LayerDrop rate to 0 .", "1 .", "For data dropout, the sequence keep rate p k and token dropout rate p are respectively 0 .", "5 and 0 .", "1 .", "The other settings are the same as in the baseline RoBERTa BASE .", "We use the standard accuracy to evaluate different methods on text classification tasks.", "Table 4 and Table 5 respectively show the accuracy of different models on GLUE tasks and typical text classification datasets.", "Compared with the conventional BiLSTM and CNN based models, we can observe the pre-trained models, including ULMFiT, BERT, RoBERTa, achieve obvious improvements on most datasets.", "Benefiting from better training strategy, RoBERTa BASE outperforms BERTBASE and even BERTLARGE on GLUE tasks.", "We can see our proposed UniDrop further improve the performance RoBERTa BASE on both small-scale and large-scale datasets.", "Specifically, UniDrop brings about 0 .", "4 improvements of accuracy on the typical text classification datasets from Table 5. 
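For reference, the RoBERTa BASE fine-tuning hyperparameters reported earlier in this section map onto, for example, HuggingFace Transformers roughly as sketched below; this is only an illustrative configuration (the UniDrop components would still have to be added inside the model's forward pass), and the dataset handling is omitted.

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=32,   # batch size 32
    num_train_epochs=30,              # max epoch 30
    learning_rate=1e-5,               # Adam with learning rate 1e-5
    warmup_ratio=0.1,                 # warm-up step ratio 0.1
    lr_scheduler_type="polynomial",   # polynomial learning-rate decay
    weight_decay=0.1,                 # weight decay 0.1
)
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```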
In contrast, RoBERTa BASE + UniDrop achieves more improvements on GLUE tasks.", "The experimental results on the 8 text classification benchmark datasets consistently demonstrate the benefit of UniDrop for Transformer.", "We show more results and ablation studies on the text classification tasks in Appendix A.5.", "In this section, we use IWSLT14 De En translation as the analysis task to investigate the capability of UniDrop to avoid overfitting, as well as the effects of the different dropout components and dropout rates.", "To show the superiority of UniDrop in preventing Transformer from overfitting, we compare the dev loss during training of Transformer, Transformer with each single dropout technique, Transformer+ UniDrop , and the ablated models of Transformer+ UniDrop .", "Figure 3 shows the loss curves of the different models.", "We can observe that the standard Transformer quickly overfits during training, even though it is equipped with a default dropout.", "In contrast, the feature dropout, structure dropout, and data dropout, as well as the combinations of any two dropouts (i.e., the ablated models), reduce the risk of overfitting to some extent.", "Among all compared models, our Transformer+ UniDrop achieves the lowest dev loss and shows a clear advantage in preventing Transformer from overfitting.", "Besides, we also find that the dev loss of Transformer+ UniDrop continuously falls until the end of training.", "We stop it there to keep the number of training epochs the same for all models for a fair comparison.", "In Appendix A.4, we also plot the curves of training loss for the above models, together with the dev loss, to give a better understanding of the regularization effects of these dropout techniques.", "In Table 1, we have presented some important ablation studies by removing FD, SD, or DD from UniDrop .", "The consistent decline of BLEU scores demonstrates their effectiveness.", "Besides, we further investigate the effects of the two existing feature dropouts FD-1 and FD-2, the two new feature dropouts FD-3 and FD-4, and our proposed two-stage data dropout.", "(Figure 4: dev and test BLEU scores on IWSLT14 De En when varying the FD, SD, and DD dropout rates from 0 to 0.4.)", "From Table 6, we can see that the four ablation models removing FDs underperform the full model Transformer+ UniDrop , which means the feature dropouts can work together to prevent Transformer from overfitting.", "In the multi-head attention module, FD-3 brings more BLEU improvement than FD-1.", "This comparison shows the insufficiency of only applying FD-1 for the Transformer architecture.", "Transformer+ UniDrop w/o 2-stage DD means we directly apply conventional data dropout to the sequence instead of our proposed 2-stage strategy.", "Compared with the full model, its performance also decreases.", "This shows the necessity of keeping the original sequence for data dropout.", "To investigate the effects of the FD, SD, and DD dropout rates on UniDrop , we respectively vary them based on the setting (FD=0.1, SD=0.1, DD=0.
2 ).", "When varying one dropout component, we keep other dropout rates unchanged.", "Figure 4 shows the corresponding results.", "We can observe that the performance of each dropout for Transformer+ UniDrop first increases then decreases when varying the dropout rates from small to large.", "Especially, varying the rate for FD dropout makes a more significant impact on the model performance since FD contains four feature dropout positions.", "In contrast, the DD is least sensitive to the dropout rate change, but it still plays a role in the model regularization.", "Dropout is a popular regularization method for neural networks by randomly dropping some neurons during training (Srivastava et al., 2014).", "Following the idea, there are abundant subsequent works designing specific dropout for specific architecture, such as StochasticDepth (Huang et al., 2016), DropPath (Larsson et al., 2017), Drop-Block (Ghiasi et al., 2018) for convolutional neural networks, Variational Dropout (Gal and Ghahra-mani, 2016), ZoneOut (Krueger et al., 2017), and Word Embedding Dropout (Gal and Ghahramani, 2016) for recurrent neural networks.", "Recently, the Transformer architecture achieves great success in a variety of tasks.", "To improve generalization of Transformer, some recent works propose LayerDrop (Fan et al., 2020a), DropHead (Zhou et al., 2020) and HeadMask (Sun et al., 2020) as structured regularizations, and obtain better performance than standard Transformer.", "Instead of designing a specific dropout for Transformer, in this work, we focus on integrating the existing dropouts into one UniDrop to further improve generalization of Transformer without any additional cost.", "Data augmentation aims at creating realistic-looking training data by applying a transformation to a sample, without changing its label (Xie et al., 2020).", "In NLP tasks, data augmentation often refers to back-translation (Sennrich et al., 2016a), word replacing/inserting/swapping/dropout (Wei and Zou, 2019; Xie et al., 2020), etc.", "In this work, we adopt simple but effective word dropout as data level dropout in our UniDrop .", "We, additionally, design a two-stage data dropout strategy.", "In this paper, we present an integrated dropout approach, UniDrop , to specifically regularize the Transformer architecture.", "The proposed UniDrop unites three different level dropout techniques from fine-grain to coarse-grain, feature dropout, structure dropout, and data dropout respectively.", "We provide a theoretical justification that the three dropouts play different roles in regularizing Transformer.", "Extensive results on neural machine translation and text classification datasets show that our Transformer+ UniDrop outperforms the standard Transformer and various ablation versions.", "Further analysis also validates the effectiveness of different dropout components and our two-stage data dropout strategy.", "In conclusion, the UniDrop improves the performance and generalization of the Transformer without additional computational cost and resource requirement.", "The authors would like to thank the anonymous reviewers for their valuable comments.", "Xinyu Dai and Lijun Wu are the corresponding authors.", "This work was partially supported by the NSFC (No. 61976114,61936012) and National Key R&D Program of China (No. 2018YFB1005102)." ]
[ "abstain", "abstain", "result", "abstain", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "objective", "method", "method", "abstain", "objective", "objective", "objective", "objective", "objective", "other", "abstain", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "abstain", "other", "other", "method", "method", "method", "abstain", "method", "result", "abstain", "abstain", "other", "other", "other" ]
[ "We introduce KBGAN , an adversarial learning framework to improve the performances of a wide range of existing knowledge graph embedding models.", "Because knowledge graphs typically only contain positive facts, sampling useful negative training examples is a nontrivial task.", "Replacing the head or tail entity of a fact with a uniformly randomly selected entity is a conventional method for generating negative facts, but the majority of the generated negative facts can be easily discriminated from positive facts, and will contribute little towards the training.", "Inspired by generative adversarial networks ( GAN s), we use one knowledge graph embedding model as a negative sample generator to assist the training of our desired model, which acts as the discriminator in GAN s.", "This framework is inde-pendent of the concrete form of generator and discriminator, and therefore can utilize a wide variety of knowledge graph embedding models as its building blocks.", "In experiments, we adversarially train two translation-based models, TRANSE and TRANSD, each with assistance from one of the two probability-based models, DISTMULT and COMPLEX .", "We evaluate the performances of KBGAN on the link prediction task, using three knowledge base completion datasets: FB15k-237, WN18 and WN18RR.", "Experimental results show that adversarial training substantially improves the performances of target embedding models under various settings.", "Knowledge graph (Dong et al., 2014) is a powerful graph structure that can provide direct access of knowledge to users via various applications such as structured search, question answering, and intelligent virtual assistant.", "A common representation of knowledge graph beliefs is in the form of a discrete relational triple such as Locate-dIn(NewOrleans,Louisiana) .", "A main challenge for using discrete representation of knowledge graph is the lack of capability of accessing the similarities among different entities and relations.", "Knowledge graph embedding (KGE) techniques (e.g., RESCAL (Nickel et al., 2011), TRANSE (Bordes et al., 2013), DISTMULT (Yang et al., 2015), and COMPLEX (Trouil-lon et al., 2016)) have been proposed in recent years to deal with the issue.", "The main idea is to represent the entities and relations in a vector space, and one can use machine learning technique to learn the continuous representation of the knowledge graph in the latent space.", "However, even steady progress has been made in developing novel algorithms for knowledge graph embedding, there is still a common challenge in this line of research.", "For space effi-ciency, common knowledge graphs such as Freebase (Bollacker et al., 2008), Yago (Suchanek et al., 2007), and NELL (Mitchell et al., 2015) by default only stores beliefs, rather than disbeliefs.", "Therefore, when training the embedding models, there is only the natural presence of the positive examples.", "To use negative examples, a common method is to remove the correct tail entity, and randomly sample from a uniform distribution (Bordes et al., 2013).", "Unfortunately, this approach is not ideal, because the sampled entity could be completely unrelated to the head and the target relation, and thus the quality of randomly generated negative examples is often poor (e.g, Locate-dIn(NewOrleans,BarackObama) ).", "Other approach might leverage external ontological constraints such as entity types (Krompa et al., 2015) to generate negative examples, but such resource does not always exist or accessible.", "Table 1 : 
Some selected knowledge graph embedding models.", "The four models above the double line are considered in this paper.", "Except for COMPLEX , all boldface lower case letters represent vectors in R k , and boldface upper case letters represent matrices in R k k .", "I is the identity matrix.", "edge graph embedding models.", "Inspired by the recent advances of generative adversarial deep models (Goodfellow et al., 2014), we propose a novel adversarial learning framework, namely, KBGAN , for generating better negative examples to train knowledge graph embedding models.", "More specifically, we consider probability-based, log-loss embedding models as the generator to supply better quality negative examples, and use distance-based, margin-loss embedding models as the discriminator to generate the final knowledge graph embeddings.", "Since the generator has a discrete generation step, we cannot directly use the gradient-based approach to back-propagate the errors.", "We then consider a one-step reinforcement learning setting, and use a variance-reduction REINFORCE method to achieve this goal.", "Empirically, we perform experiments on three common KGE datasets (FB15K-237, WN18 and WN18RR), and verify the adversarial learning approach with a set of KGE models.", "Our experiments show that across various settings, this adversarial learning mechanism can significantly improve the performance of some of the most commonly used translation based KGE methods.", "Our contributions are three-fold: We are the first to consider adversarial learning to generate useful negative training examples to improve knowledge graph embedding.", "This adversarial learning framework applies to a wide range of KGE models, without the need of external ontologies constraints.", "Our method shows consistent performance gains on three commonly used KGE datasets.", "A large number of knowledge graph embedding models, which represent entities and relations in a knowledge graph with vectors or matrices, have been proposed in recent years.", "RESCAL (Nickel et al., 2011) is one of the earliest studies on matrix factorization based knowledge graph embedding models, using a bilinear form as score function.", "TRANSE (Bordes et al., 2013) is the first model to introduce translation-based embedding.", "Later variants, such as TRANSH (Wang et al., 2014), TRANSR (Lin et al., 2015) and TRANSD (Ji et al., 2015), extend TRANSE by projecting the embedding vectors of entities into various spaces.", "DISTMULT (Yang et al., 2015) simplifies RESCAL by only using a diagonal matrix, and COMPLEX (Trouillon et al., 2016) extends DISTMULT into the complex number field.", "(Nickel et al., 2015) is a comprehensive survey on these models.", "Some of the more recent models achieve strong performances.", "MANIFOLDE (Xiao et al., 2016) embeds a triple as a manifold rather than a point.", "HOLE (Nickel et al., 2016) employs circular correlation to combine the two entities in a triple.", "CONVE (Dettmers et al., 2017) uses a convolutional neural network as the score function.", "However, most of these studies use uniform sampling to generate negative training examples (Bordes et al., 2013).", "Because our framework is indepen-dent of the concrete form of models, all these models can be potentially incorporated into our framework, regardless of the complexity.", "As a proof of principle, our work focuses on simpler models.", "Table 1 summarizes the score functions and dimensions of all models mentioned above.", "Generative Adversarial Networks ( GAN s) 
(Good-fellow et al., 2014) was originally proposed for generating samples in a continuous space such as images.", "AGAN consists of two parts, the generator and the discriminator .", "The generator accepts a noise input and outputs an image.", "The discriminator is a classifier which classifies images as true (from the ground truth set) or fake (generated by the generator).", "When training a GAN , the generator and the discriminator play a minimax game, in which the generator tries to generate real images to deceive the discriminator, and the discriminator tries to tell them apart from ground truth images.", "GAN s are also capable of generating samples satisfying certain requirements, such as conditional GAN (Mirza and Osindero, 2014).", "It is not possible to use GAN s in its original form for generating discrete samples like natural language sentences or knowledge graph triples, because the discrete sampling step prevents gradients from propagating back to the generator.", "SEQGAN (Yu et al., 2017) is one of the first successful solutions to this problem by using reinforcement learningIt trains the generator using policy gradient and other tricks.", "IRGAN (Wang et al., 2017) is a recent work which combines two categories of information retrieval models into a discrete GAN framework.", "Likewise, our framework relies on policy gradient to train the generator which provides discrete negative triples.", "The discriminator in a GAN is not necessarily a classifier.", "Wasserstein GAN or WGAN (Arjovsky et al., 2017) uses a regressor with clipped parameters as its discriminator, based on solid analysis about the mathematical nature of GAN s.", "GOGAN (Juefei-Xu et al., 2017) further replaces the loss function in WGAN with marginal loss.", "Although originating from very different fields, the form of loss function in our framework turns out to be more closely related to the one in GOGAN .", "In this section, we first define two types of training objectives in knowledge graph embedding models to show how KBGAN can be applied.", "Then, we demonstrate a long overlooked problem about negative sampling which motivates us to propose KBGAN to address the problem.", "Finally, we dive into the mathematical, and algorithmic details of KBGAN .", "For a given knowledge graph, let E be the set of entities, R be the set of relations, and T be the set of ground truth triples.", "In general, a knowledge graph embedding (KGE) model can be formulated as a score function f ( h, r, t ) , h, t E , r R which assigns a score to every possible triple in the knowledge graph.", "The estimated likelihood of a triple to be true depends only on its score given by the score function.", "Different models formulate their score function based on different designs, and therefore interpret scores differently, which further lead to various training objectives.", "Two common forms of training objectives are particularly of our interest: Marginal loss function is commonly used by a large group of models called translation-based models, whose score function models distance between points or vectors, such as TRANSE, TRANSH, TRANSR, TRANSD and so on.", "In these models, smaller distance indicates a higher likelihood of truth, but only qualitatively.", "The marginal loss function takes the following form: L m = X ( h,r,t ) T [ f ( h, r, t ) f ( h 0 , r, t 0 ) + ] + (1) where is the margin, [ ] + = max(0 , ) is the hinge function, and ( h 0 , r, t 0 ) is a negative triple.", "The negative triple is generated by replacing the head 
entity or the tail entity of a positive triple with a random entity in the knowledge graph, or formally $(h', r, t') \in \{(h', r, t) \mid h' \in \mathcal{E}\} \cup \{(h, r, t') \mid t' \in \mathcal{E}\}$.", "The log-softmax loss function is commonly used by models whose score function has a probabilistic interpretation.", "Some notable examples are RESCAL , DISTMULT , and COMPLEX .", "Applying the softmax function to the scores of a given set of triples gives the probability of a triple being the best one among them: $p(h, r, t) = \frac{\exp f(h, r, t)}{\sum_{(h', r, t')} \exp f(h', r, t')}$.", "The loss function is the negative log-likelihood of this probabilistic model: $L_l = -\sum_{(h, r, t) \in \mathcal{T}} \log \frac{\exp f(h, r, t)}{\sum_{(h', r, t')} \exp f(h', r, t')}$, where $(h', r, t') \in \{(h, r, t)\} \cup \mathrm{Neg}(h, r, t)$, (2) and $\mathrm{Neg}(h, r, t) \subset \{(h', r, t) \mid h' \in \mathcal{E}\} \cup \{(h, r, t') \mid t' \in \mathcal{E}\}$ is a set of sampled corrupted triples.", "Figure 1 : An overview of the KBGAN framework.", "The generator (G) calculates a probability distribution over a set of candidate negative triples, then samples one triple from the distribution as the output.", "The discriminator (D) receives the generated negative triple as well as the ground truth triple (in the hexagonal box), and calculates their scores.", "G minimizes the score of the generated negative triple by policy gradient, and D minimizes the marginal loss between positive and negative triples by gradient descent.", "Other forms of loss functions exist; for example, CONVE uses a triple-wise logistic function to model how likely the triple is to be true, but by far the two described above are the most common.", "Also, the softmax function gives a probability distribution over a set of triples, which is necessary for a generator to sample from them.", "Most previous KGE models use uniform negative sampling for generating negative triples, that is, replacing the head or tail entity of a positive triple with any of the entities in $\mathcal{E}$, all with equal probability.", "Most of the negative triples generated in this way contribute little to learning an effective embedding, because they are too obviously false.", "To demonstrate this issue, let us consider the following example.", "Suppose we have a ground truth triple LocatedIn(NewOrleans,Louisiana) , and corrupt it by replacing its tail entity.", "First, we remove the tail entity, leaving LocatedIn(NewOrleans,?) .", "Because the relation LocatedIn constrains the types of its entities, ? must be a geographical region.", "If we fill ?
with a random entity e E , the probability of e having a wrong type is very high, resulting in ridiculous triples like Lo-catedIn(NewOrleans,BarackObama) or Locate-dIn(NewOrleans,StarTrek) .", "Such triples are considered too easy, because they can be eliminated solely by types.", "In contrast, Locate-dIn(NewOrleans,Florida) is a very useful negative triple, because it satisfies type constraints, but it cannot be proved wrong without detailed knowledge of American geography.", "If a KGE model is fed with mostly too easy negative examples, it would probably only learn to represent types, not the underlying semantics.", "The problem is less severe to models using log-softmax loss function, because they typically samples tens or hundreds of negative triples for one positive triple in each iteration, and it is likely to have a few useful negatives among them.", "For instance, (Trouillon et al., 2016) found that a 100:1 negative-to-positive ratio results in the best performance for COMPLEX .", "However, for marginal loss function, whose negative-to-positive ratio is always 1:1, the low quality of uniformly sampled negatives can seriously damage their performance.", "Inspired by GANs, we propose an adversarial training framework named KBGAN which uses a KGE model with softmax probabilities to provide high-quality negative samples for the training of a KGE model whose training objective is marginal loss function .", "This framework is inde-pendent of the score functions of these two models, and therefore possesses some extent of universality.", "Figure 1 illustrates the overall structure of KBGAN .", "In parallel to terminologies used in GAN literature, we will simply call these two models generator and discriminator respectively in the rest of this paper.", "We use softmax probabilistic models as the generator because they can adequately model the sampling from a probability distribu-1473 Algorithm 1: The KBGAN algorithm Data: training set of positive fact triples T = { ( h, r, t ) } Input: Pre-trained generator G with parameters G and score function f G ( h, r, t ) , and pre-trained discriminator D with parameters D and score function f D ( h, r, t ) Output: Adversarially trained discriminator 1 b 0 ; // baseline for policy gradient 2 repeat 3 Sample a mini-batch of data T batch from T ; 4 GG 0 , GD 0 ; // gradients of parameters of G and D 5 r sum 0 ; // for calculating the baseline 6 for ( h, r, t ) T batch do 7 Uniformly randomly sample N s negative triples Neg ( h, r, t ) = { ( h 0 i , r, t 0 i ) } i =1 ...N s ; 8 Obtain their probability of being generated: p i = exp f G ( h 0 i ,r,t 0 i ) P Nsj =1 exp f G ( h 0 j ,r,t 0 j ) ; 9 Sample one negative triple ( h 0 s , r, t 0 s ) from Neg ( h, r, t ) according to { p i } i =1 ...N s . 
Assume its probability to be p s ; 10 GD GD + D [ f D ( h, r, t ) f D ( h 0 s , r, t 0 s ) + ] + ; // accumulate gradients for D 11 r f D ( h 0 s , r, t 0 s ) , r sum r sum + r ; // r is the reward 12 GG GG + ( r b ) G log p s ; // accumulate gradients for G 13 end 14 G G + GGG , D D DGD ; // update parameters 15 b r sum / |T batch | ; // update baseline 16 until convergence ; tion process of discrete GAN s, and we aim at improving discriminators based on marginal loss because they can benefit more from high-quality negative samples.", "Note that a major difference between GAN and our work is that, the ultimate goal of our framework is to produce a good discriminator, whereas GANS are aimed at training a good generator.", "In addition, the discriminator here is not a classifier as it would be in most GAN s.", "Intuitively, the discriminator should assign a relatively small distance to a high-quality negative sample.", "In order to encourage the generator to generate useful negative samples, the objective of the generator is to minimize the distance given by discriminator for its generated triples.", "And just like the ordinary training process, the objective of the discriminator is to minimize the marginal loss between the positive triple and the generated negative triple.", "In an adversarial training setting, the generator and the discriminator are alternatively trained towards their respective objectives.", "Suppose that the generator produces a probability distribution on negative triples p G ( h 0 , r, t 0 | h, r, t ) given a positive triple ( h, r, t ) , and generates negative triples ( h 0 , r, t 0 ) by sampling from this distribution.", "Let f D ( h, r, t ) be the score function of the discriminator.", "The objective of the discriminator can be formulated as minimizing the following marginal loss function: LD = X ( h,r,t ) T [ f D ( h, r, t ) f D ( h 0 , r, t 0 ) + ] + ( h 0 , r, t 0 ) p G ( h 0 , r, t 0 | h, r, t ) (3) The only difference between this loss function and Equation 1 is that it uses negative samples from the generator.", "The objective of the generator can be formulated as maximizing the following expectation of negative distances: RG = X ( h,r,t ) T E [ f D ( h 0 , r, t 0 )] ( h 0 , r, t 0 ) p G ( h 0 , r, t 0 | h, r, t ) (4) RG involves a discrete sampling step, so we cannot find its gradient with simple differentiation.", "We use a simple special case of Policy Gradient Theorem 1 (Sutton et al., 2000) to obtain the gradient of RG with respect to parameters of the generator: GRG = X ( h,r,t ) T E ( h 0 ,r,t 0 ) p G ( h 0 ,r,t 0 | h,r,t ) [ f D ( h 0 , r, t 0 ) G log p G ( h 0 , r, t 0 | h, r, t )] ' X ( h,r,t ) T 1 NX ( h 0 i ,r,t 0 i ) p G ( h 0 ,r,t 0 | h,r,t ) ,i =1 ...N [ f D ( h 0 , r, t 0 ) G log p G ( h 0 , r, t 0 | h, r, t )] (5) 1 A proof can be found in the supplementary material 1474 Model Hyperparameters Constraints or Regularizations TRANSE L 1 distance, k = 50 , = 3 || e || 2 1 , || r || 2 1 TRANSD L 1 distance, k = 50 , = 3 || e || 2 1 , || r || 2 1 , || e p || 2 1 , || r p || 2 1 DISTMULT k = 50 , = 1 / 0 .", "Table 2 : Hyperparameter settings of the 4 models we used.", "For DISTMULT and COMPLEX , = 1 is used for FB15k-237 and = 0 .", "1 is used for WN18 and WN18RR.", "All other hyperparameters are shared among all datasets.", "L is the global loss defined in Equation (2).", "represents all parameters in the model.", "where the second approximate equality means we approximate the expectation with sampling in practice.", "Now we can calculate the 
gradient of RG and optimize it with gradient-based algorithms.", "Policy Gradient Theorem arises from reinforcement learning (RL), so we would like to draw an analogy between our model and an RL model.", "The generator can be viewed as an agent which interacts with the environment by performing actions and improves itself by maximizing the reward returned from the environment in response of its actions.", "Correspondingly, the discriminator can be viewed as the environment.", "Using RL terminologies, ( h, r, t ) is the state (which determines what actions the actor can take), p G ( h 0 , r, t 0 | h, r, t ) is the policy (how the actor choose actions), ( h 0 , r, t 0 ) is the action , and f D ( h 0 , r, t 0 ) is the reward .", "The method of optimizing RG described above is called REINFORCE (Williams, 1992) algorithm in RL.", "Our model is a simple special case of RL, called one-step RL.", "In a typical RL setting, each action performed by the agent will change its state, and the agent will perform a series of actions (called an epoch ) until it reaches certain states or the number of actions reaches a certain limit.", "However, in the analogy above, actions does not affect the state, and after each action we restart with another unrelated state, so each epoch consists of only one action.", "To reduce the variance of REINFORCE algorithm, it is common to subtract a baseline from the reward, which is an arbitrary number that only depends on the state, without affecting the expectation of gradients.", "2 In our case, we replace f D ( h 0 , r, t 0 ) with f D ( h 0 , r, t 0 ) b ( h, r, t ) in the equation above to introduce the baseline.", "To avoid introducing new parameters, we simply let b be a constant, the average reward of the whole training set: b = P ( h,r,t ) T E ( h 0 ,r,t 0 ) p G ( h 0 ,r,t 0 | h,r,t ) [ f D ( h 0 , r, t 0 )] .", "In practice, b is approximated by the mean of rewards of recently generated negative triples.", "Let the generator's score function to be f G ( h, r, t ) , given a set of candidate negative triples Neg ( h, r, t ) { ( h 0 , r, t ) | h 0 E}{ ( h, r, t 0 ) | t 0 E} , the probability distribution p G is modeled as: p G ( h 0 , r, t 0 | h, r, t ) = exp f G ( h 0 , r, t 0 ) P exp f G ( h , r, t ) ( h , r, t ) Neg ( h, r, t ) (6) Ideally, Neg ( h, r, t ) should contain all possible negatives.", "However, knowledge graphs are usually highly incomplete, so the hardest negative triples are very likely to be false negatives (true facts).", "To address this issue, we instead generate Neg ( h, r, t ) by uniformly sampling of N s entities (a small number compared to the number of all possible negatives) from E to replace h or t .", "Because in real-world knowledge graphs, true negatives are usually far more than false negatives, such set would be unlikely to contain any false negative, and the negative selected by the generator would likely be a true negative.", "Using a small Neg ( h, r, t ) can also significantly reduce computational complexity.", "Besides, we adopt the bern sampling technique (Wang et al., 2014) which replaces the 1 side in 1-to-N and N-to-1 relations with higher probability to further reduce false negatives.", "Algorithm 1 summarizes the whole adversarial training process.", "Both the generator and the dis-2 A proof of such fact can also be found in the supplementary material 1475 criminator require pre-training, which is the same as conventionally training a single KBE model with uniform negative sampling.", "Formally speaking, one can pre-train the 
generator by minimizing the loss function defined in Equation (1), and pre-train the discriminator by minimizing the loss function defined in Equation (2).", "Line 14 in the algorithm assumes that we are using the vanilla gradient descent as the optimization method, but obviously one can substitute it with any gradient-based optimization algorithm.", "To evaluate our proposed framework, we test its performance for the link prediction task with different generators and discriminators.", "For the generator, we choose two classical probability-based KGE model, DISTMULT and COMPLEX , and for the discriminator, we also choose two classical translation-based KGE model, TRANSE and TRANSD, resulting in four possible combinations of generator and discriminator in total.", "See Table 1 for a brief summary of these models.", "We use three common knowledge base completion datasets for our experiment: FB15k-237, WN18 and WN18RR.", "FB15k-237 is a subset of FB15k introduced by (Toutanova and Chen, 2015), which removed redundant relations in FB15k and greatly reduced the number of relations.", "Likewise, WN18RR is a subset of WN18 introduced by (Dettmers et al., 2017) which removes reversing relations and dramatically increases the difficulty of reasoning.", "Both FB15k and WN18 are first introduced by (Bordes et al., 2013) and have been commonly used in knowledge graph researches.", "Statistics of datasets we used are shown in Table 3.", "Following previous works like (Yang et al., 2015) and (Trouillon et al., 2016), for each run, we report two common metrics, mean reciprocal ranking (MRR) and hits at 10 (H@10).", "We only report scores under the filtered setting (Bordes et al., 2013), which removes all triples appeared in training, validating, and testing sets from candidate triples before obtaining the rank of the ground truth triple.", "3 In the pre-training stage, we train every model to convergence for 1000 epochs, and divide every epoch into 100 mini-batches.", "To avoid overfit-ting, we adopt early stopping by evaluating MRR on the validation set every 50 epochs.", "We tried = 0 .", "5 , 1 , 2 , 3 , 4 , 5 and L 1 , L 2 distances for TRANSE and TRANSD, and = 0 .", "01 , 0 .", "1 , 1 , 10 for DISTMULT and COMPLEX , and determined the best hyperparameters listed on table 2, based on their performances on the validation set after pre-training.", "Due to limited computation resources, we deliberately limit the dimensions of embeddings to k = 50 , similar to the one used in earlier works, to save time.", "We also apply certain constraints or regularizations to these models, which are mostly the same as those described in their original publications, and also listed on table 2.", "In the adversarial training stage, we keep all the hyperparamters determined in the pre-training stage unchanged.", "The number of candidate negative triples, N s , is set to 20 in all cases, which is proven to be optimal among the candidate set of { 5 , 10 , 20 , 30 , 50 } .", "We train for 5000 epochs, with 100 mini-batches for each epoch.", "We also use early stopping in adversarial training by evaluating MRR on the validation set every 100 epochs.", "We use the self-adaptive optimization method Adam (Kingma and Ba, 2015) for all trainings, and always use the recommended default setting = 0 .", "001 , 1 = 0 .", "9 , 2 = 0 .", "999 , (cid:15) = 10 8 .", "Results of our experiments as well as baselines are shown in Table 4.", "All settings of adversarial training bring a pronounced improvement to the model, which indicates 
that our method is consistently effective in various cases.", "TRANSE performs slightly worse than TRANSD on FB15k-237 and WN18, but better on WN18RR.", "Using DISTMULT or COMPLEX as the generator does not affect performance greatly.", "TRANSE and TRANSD enhanced by KBGAN can significantly beat their corresponding baseline implementations, and outperform stronger baselines in some cases.", "As a prototypical and proof-of-principle experiment, we have never expected state-of-the-art results.", "(Footnote 3: The KBGAN source code is available at https://github.com/cai-lw/KBGAN.)", "Table 4 : Experimental results.
| Method | FB15k-237 MRR | FB15k-237 H@10 | WN18 MRR | WN18 H@10 | WN18RR MRR | WN18RR H@10 |
| TRANSE | - | 42.8 | - | 89.2 | - | 43.2 |
| TRANSD | - | 45.3 | - | 92.2 | - | 42.8 |
| DISTMULT | 24.1 | 41.9 | 82.2 | 93.6 | 42.5 | 49.1 |
| COMPLEX | 24.0 | 41.9 | 94.1 | 94.7 | 44.4 | 50.7 |
| TRANSE (pre-trained) | 24.2 | 42.2 | 43.3 | 91.5 | 18.6 | 45.9 |
| KBGAN (TRANSE + DISTMULT) | 27.4 | 45.0 | 71.0 | 94.9 | 21.3 | 48.1 |
| KBGAN (TRANSE + COMPLEX) | 27.8 | 45.3 | 70.5 | 94.9 | 21.0 | 47.9 |
| TRANSD (pre-trained) | 24.5 | 42.7 | 49.4 | 92.8 | 19.2 | 46.5 |
| KBGAN (TRANSD + DISTMULT) | 27.8 | 45.8 | 77.2 | 94.8 | 21.4 | 47.2 |
| KBGAN (TRANSD + COMPLEX) | 27.7 | 45.8 | 77.9 | 94.8 | 21.5 | 46.9 |", "Results of KBGAN are the results of its discriminator (on the left of the + sign).", "Underlined results are the best ones among our implementations.", "Results marked with are produced by running Fast-TransX (Lin et al., 2015) with its default parameters.", "Results marked with are copied from (Dettmers et al., 2017).", "All other baseline results are copied from their original papers.", "Figure 2 : Learning curves of KBGAN .", "All metrics improve steadily as training proceeds.", "Being simple models proposed several years ago, TRANSE and TRANSD have limitations in expressiveness that are unlikely to be fully compensated for by a better training technique.", "In future research, people may try employing more advanced models in KBGAN , and we believe it has the potential to become state-of-the-art.", "To illustrate our training progress, we plot the performance of the discriminator on the validation set over epochs, as displayed in Figure 2.", "As all these graphs show, our performance always follows an increasing trend, converging to its maximum as training proceeds, which indicates that KBGAN is a robust GAN that can converge to good results in various settings, although GANs are well known for difficulty in convergence.", "Fluctuations in these graphs may seem more prominent than for other KGE models, but this is considered normal for an adversarially trained model.", "Note that in some cases the curve still tends to rise after 5000 epochs.", "We do not have sufficient computation resources to train for more epochs, but we believe that they will also eventually converge.", "Table 5 : Examples of negative samples in WN18 dataset.
semantically related negative samples, which is different from type relatedness we used as example in Section 3.2, but also helps training.", "In the first example, two of the five terms are physically related to the process of distilling liquids.", "In the second example, three of the five entities are geographical objects.", "In the third example, two of the five entities express the concept of gather.", "Because we deliberately limited the strength of generated negatives by using a small N s as described in Section 3.3, the semantic relation is pretty weak, and there are still many unrelated entities.", "However, empirical results (when selecting the optimal N s ) shows that such situation is more beneficial for training the discriminator than generating even stronger negatives.", "We propose a novel adversarial learning method for improving a wide range of knowledge graph embedding modelsWe designed a generator-discriminator framework with dual KGE components.", "Unlike random uniform sampling, the generator model generates higher quality negative examples, which allow the discriminator model to learn better.", "To enable backpropagation of error, we introduced a one-step REINFORCE method to seamlessly integrate the two modules.", "Experimentally, we tested the proposed ideas with four commonly used KGE models on three datasets, and the results showed that the adversarial learning framework brought consistent improvements to various KGE models under different settings." ]
[ "result", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "result", "result", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective" ]
[ "Short textual descriptions of entities provide summaries of their key attributes and have been shown to be useful sources of background knowledge for tasks such as entity linking and question answering.", "However, generating entity descriptions, especially for new and long-tail entities, can be challenging since relevant information is often scattered across multiple sources with varied content and style.", "We introduce DESCGEN : given mentions spread over multiple documents, the goal is to generate an entity summary description.", "DESCGEN consists of 37K entity descriptions from Wikipedia and Fandom, each paired with nine evidence documents on average.", "The documents were collected using a combination of entity linking and hyperlinks to the Wikipedia and Fandom entity pages, which together provide high quality distant supervision.", "The resulting summaries are more abstractive than those found in existing datasets, and provide a better proxy for the challenge of describing new and emerging entities.", "We also propose a two-stage extract-then-generate baseline and show that there exists a large gap (19.9% in ROUGE-L) between state-of-the-art models and human performance, suggesting that the data will support significant future work.", "1 1 Introduction Entity knowledge has been shown to play an important role in various applications including language modeling (Peters et al., 2019), open-domain question answering (Xu et al., 2016), and dialogue generation (Qin et al., 2019).", "Recent studies suggest that such entity knowledge can be provided by simple textual descriptions (Chen et al., 2019), which can be incorporated to improve downstream task performance (Nie et al., 2018; Logeswaran 1 Data and code available at github.com/swj0419/DESCGEN Doc 1 ...Are bitcoins, then, really worth anything?", "According to Carl Menger's subjective theory of value, they are worth whatever individuals choose to believe they are worth.", "It is clear that many individuals value this new medium of exchange highly...", "Doc 2 ...The Austrian School of Economics has its roots outside of Austria particularly in the French economists Jean Baptiste Say and Claude-Frederic Bastiat.", "The Austrian School proper began with Carl Menger , who challenged the British labor theory of value.", "To learn more about Austrian Economics go to the website of The Ludwig von Mises Institute...", "Entity Description Carl Menger (February 23, 1840 February 26, 1921) was an Austrian economist and the founder of the Austrian School of economics.", "He contributed to the development of the marginal utility theory and to the formulation of a subjective theory of value.", "et al., 2019).", "However, manually curating entity descriptions is labor-intensive and it is challenging to keep pace with the ever growing emergence of new entities.", "In this paper, we present a new dataset DESCGEN for automatically generating entity descriptions from relevant documents and mentions, which provides high quality supervision for a highly abstractive version of this task that targets early description of new entities as they emerge.", "For example, in Table 13, machines are required to generate a description of Carl Menger , given multiple documents mentioning him.", "DESCGEN contains 37K entity descriptions extracted from Wikipedia and Fandom 2 .", "Fandom 2 Fandom is a set of encyclopedias centered around forms of entertainment such as movies, games etc. 
allows us to capture the key challenge of generating descriptions for emerging entities that are not in Wikipedia because they are less popular or have just been introduced to the public.", "To obtain source documents of the entities, we collect web documents and news articles where entity mentions are linked using web hyperlinks or an entity linker.", "Our dataset is distantly supervised in that these heuristically collected documents are not guaranteed to contain all the facts required to generate the descriptionas would be seen for natural text collections describing emerging entities.", "We also carefully annotate a subset of 1,000 examples to support more reliable evaluation (see Table 2 for dataset statistics).", "Unlike multi-document summarization that makes the assumption that a set of documents to be summarized are written on the same topic (Zopf et al., 2016), DESCGEN only assumes that source documents mention the entity.", "In contrast to an existing entity summarization benchmark (Liu et al., 2018, WikiSum), DESCGEN is more abstractive and better approximates challenges faced when describing new entities.", "Section 4.4 provides more details on these comparisons.", "Overall, our documents for generating a description can cover a much wider range of topics as well as text genres, including news, blog posts, and scientific articles.", "For instance, the documents 1 and 2 mentioning Carl Menger in Figure 13 discuss topics on bitcoins and the Austrian School of Economics.", "Finally, we also propose a two-stage method that first extracts salient sentences relevant to the entity and then abstracts them into a description.", "We test a range of models to establish baseline results with both automatic and human evaluation.", "The best model based on BART (Lewis et al., 2020b) achieves 28.2% in the ROUGE-L F measure with a significant gap compared to the human performance 48.1%, suggesting there was great room for future improvement.", "In summary, our contributions include: We propose a new dataset DESCGEN that includes challenging, abstractive entity summaries.", "Our dataset contains over 37K pairs of entity descriptions and their associated documents, along with a human-annotated subset of 1,000 pairs.", "We conduct an extensive analysis of properties of the dataset and identify its challenges extractive content selection from large Wikipedia Fandom Entities 26,585 11,366 Documents 177,454 170,204 Input size 11,568 1,872 Output size 53 32 Human-authored descriptions 598 403 Table 2: Basic statistics for DESCGEN .", "amounts of text and abstractive generation from it, particularly for emerging entities.", "We present a two-stage method and benchmark various models on our dataset, aiming to facilitate future work on this dataset.", "Existing Entity Description Generation Task and Dataset Previous works (Novikova et al., 2017; Cheng et al., 2020; Trisedya et al., 2020) mainly take as input some structured data such as knowledge graphs to generate entity descriptions.", "However, knowledge graphs, often mined from text corpora, are overwhelmingly incomplete on real-world entities and may not be updated in real-time (Dong et al., 2014).", "Therefore, we focus on generating descriptions from natural language sources such as web texts and news because they are often primary sources for entities and have better coverage of entities across multiple domains.", "DESCGEN is most related to WikiSum, a recent dataset for generating Wikipedia summaries from textual sources (Liu et al., 2018).", 
"WikiSum source documents primarily come from high-quality articles cited in the Wikipedia pages which makes their data more extractive (Section 4.4).", "In contrast, we collect our source documents heuristically using web texts and news, providing a better proxy for emerging entities where high-quality citation sources may not be available.", "In addition, their evaluation is conducted only on distantly supervised test data.", "However, our experiments demonstrate that manually annotated data allows for much better evaluation of model performance (Table 7).", "Multi-document summarization aims to condense a cluster of thematically-related documents into a short and informative summary.", "A wide range of multi-document summarization datasets have been built for the Document Understanding and Text Analysis Conferences (Over and Yen, 2004; Owczarzak and Dang, 2011), news (Fab-bri et al., 2019), events (Gholipour Ghalandari et al., 2020) and Wikipedia summaries (Liu et al., 2018).", "Recent work has studied both extractive (Ya-sunaga et al., 2017; Nallapati et al., 2017; Tohalino and Amancio, 2018) and abstractive summarization (Banerjee et al., 2015; Chali et al., 2017; Nay-eem et al., 2018).", "However, existing datasets typically are not entity focused and assume the input documents are at least loosely centered around a coherent topic or event.", "Wikipedia generation Our work is also related to research on generating Wikipedia articles.", "For instance, Sauper and Barzilay (2009) learn to build content templates using an integer linear program to generate full articles.", "Similarly, Banerjee and Mitra (2016) generate Wikipedia pages by building a topic classifier to assign web retrieved contents into relevant sections.", "We focus on a different task generating a short text description that can identify and best summarize an entity.", "Task definition Given a collection of documents D = { D i | i = 1 ...n } with mentions linked to the same entity e , the goal is to generate a description of e .", "For example, Table 13 shows a description of an entity ( Carl Menger ) and three source documents with mentions.", "Distant supervision We make use of existing knowledge bases, such as Wikipedia and Fandom, to collect entity descriptions.", "To obtain source documents and mentions for each entity, we use a combination of hyperlinks to Wikipedia pages and an entity linker that links entity mentions in text.", "Our dataset is distantly supervised in that these heuristically collected documents are not guaranteed to contain all the facts required to generate the description.", "To analyze the quality of distant supervision, we collect a smaller verified set of entity descriptions using human annotators.", "In contrast with our work, WikiSum (Liu et al., 2018) used documents cited in the Wikipedia pages or web pages returned by Google as source documents to generate Wikipedia lead sections.", "Because high-quality citation sources constitute a substantial part of overall documents (75%), their dataset is less abstractive than DESCGEN and unsuited for emerging entities where citations are not available.", "Sources We paired entity descriptions with source documents from three sources: Wikilinks, RealNews, and Fandom using distant supervision.", "To capture the challenge of emerging entities, we retrieve source documents that are not in Wikipedia using Wikilinks and RealNews.", "We also include specialized entities in Fandom that do not have Wikipedia pages.", "For quality control, we filter out 
entities for which the unigram recall of the entity description against its concatenated source documents is lower than 0.6.", "Wikilinks Wikilinks (Singh et al., 2012) is a large dataset designed for cross-document coreference.", "It consists of non-Wikipedia web pages (discovered using the Google search index) containing entities that are hyperlinked to Wikipedia.", "For each entity, we retrieve a collection of web pages in Wikilink with the anchor text linked to it and use the lead section of target Wikipedia page as its description.", "We further parse the HTML texts of the web pages and extract contents as source documents.", "Real News To expand the collection of source documents, we extract entity mentions in Real-News (Zellers et al., 2019), a large corpus of news articles from Common Crawl.", "We first conduct a longest prefix match between the entity surface form and text tokens via trie, a prefix tree structure that supports efficient string searching.", "More specifically, we build a trie of entity names where each node is a word and its children indicate all possible continuations from the prefix.", "After retriv-ing candidates for entity mentions, we use an off-the-shelf entity linking model (Gupta et al., 2017) to rank the candidates and add the corresponding news articles as source documents of the rank-1 candidate.", "Fandom Fandom 3 is a collection of encyclopedias, centered around particular subjects and themes such as movies, TV shows, and games.", "It contains specialized entities that require domain experts with background knowledge to make edits.", "Entities and their source documents can be automatically extracted by internal links.", "We filter out entities and only keep those without Wikipedia pages, which can be viewed as new or emerging entities.", "The description of the entity is extracted 3 https://www.fandom.com/ Entity source Train Dev Test Wikipedia Distant 21,267 2,659 2,659 (Wikilinks + Real news) Verified 299 299 Fandom Distant 9,092 1,137 1,137 Verified 202 201 Table 3: Number of entities for train, dev and test set.", "from the lead section of its Fandom page.", "We collect data from the 32 largest Fandom Wikis.", "Entity descriptions extracted from Wikipedia and Fandom have been authored and edited by multiple community contributors largely independently of our source documents.", "We collected additional entity descriptions via Upwork, 4 a freelancing platform, to better analyze how descriptions sourced from documents in our dataset contrast with those from Wikipedia and Fandom.", "We provided the entity and its source documents to annotators on Upwork, and asked them to write the entity descriptions.", "The annotators are also asked to mark sentences they used to write the description.", "Each entity was assigned to 2 annotators.", "We collected 500 entity descriptions for dev examples and 500 descriptions for test examples.", "We control the quality of the crowdsourced descriptions by filtering annotators who produced low-quality descriptions.", "We ask every candidate to annotate the same 20 examples and use two criteria for narrowing down candidates: (1) missing key information in descriptions (2) unjustified information in descriptions that cannot be inferred from source documents alone.", "Eventually, we filtered out 4 annotators and accepted 7 qualified annotators.", "The total annotation cost was around $3500.", "All 37K entity description and document pairs in the dataset are randomly split into train, development and test sets.", "In 
addition to automatically collected descriptions from Wikipedia and Fandom, we use the human-authored descriptions (Sec-tion 3.2) as verified subsets into dev and test splits.", "Table 3 shows basic statistics of the final dataset.", "We report model performance on automatically collected descriptions (distant) and human-authored descriptions (verified).", "An analysis of the data shows that DESCGEN contains a high proportion of emerging entities from diverse domains, and is more extractive compared to other multi-document summarization datasets.", "Table 2 shows data statistics.", "DESCGEN contains about 37K entity descriptions from Wikipedia and Fandom.", "On average, each entity has nine source documents.", "We can see that 36% percent of entities come from Fandom, and therefore have never had a Wikipedia page written about them.", "Domain diversity Figure 1 shows that DESCGEN covers a diverse set of entity domains.", "For analysis, we associate entities in Wikipedia with domains (GPE, LOC, PER, ORG, EVENT, COMPANY, GROUP and MISC) by querying the DBPe-dia knowledge-base (Lehmann et al., 2015).", "Each entity in Fandom is manually categorized into 5 domains: movie, game, fiction, TV series and cartoon based on its source Wiki.", "An analysis of baseline performance by entity type and domain (Sec-tion 7.3) reveals a notable drop for less popular domains such as Games and Fiction, highlighting generalization challenges.", "Each entity in the verified subset has two descriptions written by two annotators.", "Following previous work (Chen et al., 2015), we quantify inter-annotator agreement on descriptions by treating one of the descriptions as the prediction and the other as the reference to compute ROUGE (Lin, 2004) and METEOR (Denkowski and Lavie, 2014).", "Table 4 shows high inter-annotator agreement of 47.7 in terms of ROUGE-L.", "We additionally measure the agreement on content selection using sentences marked by annotators.", "In particular, agreement is achieved when both annotators selected the exact same sentences in all source documents for an entity.", "Cohen's Kappa is 0.38, which indicates high agreement (Brennan and Prediger, 1981) considering the strict criterion of reaching agreement.", "To understand how human-authored descriptions differ with Wikipedia and Fandom descriptions in terms of content and style, we compare them using automatic metrics (ROUGE) and manual evaluation.", "ROUGE Table 5 shows the averaged ROUGE scores of human-authored descriptions against Wikipedia and Fandom descriptions.", "Human-authored descriptions have higher word overlap with Wikipedia descriptions than with Fandom descriptions.", "Pairwise comparison Can humans distinguish between Wikipedia/Fandom and human-authored descriptions?", "We have two human assessors evaluate 50 randomly sampled pairs of human-authored and Wikipedia/Fandom descriptions in a blind pairwise comparison, and ask them to classify descriptions into two categories: human-authored or Wikipedia/Fandom.", "The classification accuracy in Wikipedia and Fandom is 64.4% and 61.1% respectively and the inter-annotator agreement is 0.67 in Cohen's Kappa.", "The relatively low classification accuracy suggests that there is no substantial Category Paraphrasing Missing info.", "Extra details Wikipedia 29 16 22 Fandom 32 15 26 Table 6: Number of times a human-authored description is classified into error categories with Wikipedia/Fandom descriptions as reference.", "Quality analysis of distant supervision We are interested in 
understanding if automatically gathered documents can provide enough signals for writing the entity descriptions.", "To study the quality of distant supervision, we manually analyze 40 human-authored descriptions that have low n-grams overlap with Wikipedia/Fandom descriptions, in terms of paraphrasing (does the human-authored description express the same meaning but use different words?), missing information (does the human-authored description miss any information in Wikipedia/Fandom description?) and extra details (does the human-authored description contain extra details not included in the Wikpe-dia/Fandom description?).", "We use Wikipedia and Fandom descriptions as the ground truth and classify each human-authored description into one or more categories.", "The results are shown in Table 6.", "We find that the difference between the two sources of descriptions are mainly caused by paraphrasing and missing information.", "This suggests that even for entities that have very different human-authored and extracted descriptions, most of the information in the Wikipedia/Fandom descriptions is present in the documents.", "Generating entity descriptions involves extracting essential information about the entity and condensing them into a short description.", "To measure how much DESCGEN requires paraphasing and compressing, we quantify the extractive nature of our dataset by the measuring extractive fragment coverage and density defined in Grusky et al. (2018).", "Extractive fragment coverage computes the percentage of words in summary that appear in source documents: Coverage ( A, S ) = 1 | S | (cid:88) f F | f | Figure 2: Density and coverage on different datasets.", "where A is a concatenation of the source documents, S is the description and F is the set of shared token sequences in A and S .", "Likewise, extractive fragment density is related to the average length of shared token sequences.", "For example, an entity description with high coverage and low density shares many individual words with source documents but almost no long phrases.", "We compare our dataset with several multi-document summarization datasets, including CNN / Daily Mail, Multi-News (Fabbri et al., 2019) and WikiSum (Liu et al., 2018).", "Figure 2 presents the density and coverage distribution.", "The density of Multi-News, CNN / Daily Mail and WikiSum are high, showing that there is much copying of long sequences with respect to source documents.", "DESCGEN shows high coverage but low density, suggesting it is not common to copy long sequences and the data overall is much more abstractive.", "In this section, we introduce several new baseline methods, building on state-of-the-art pre-trained models.", "The input documents can be long (Sec-tion 8), making it computationally infeasible to train end-to-end models.", "We instead introduce a pipelined approach to generate an entity description in two stages.", "In the first extractive stage, a selector is used to identify representative sentences relevant to the entity from multiple source documents.", "In the second abstractive stage, a neural generation model is used to fuse the selected sentences to a description of the entity.", "We compare a number of different approaches for each stage, as summarized in the subsections below.", "Trivial concatenates all sentences that mention the entity, along with one sentence before and after each.", "The content is truncated to the first 1,000 tokens to fit the token limit of models in the abstractive stage.", "Cheating ranks 
sentences according to their unigram recall against the description and selects the top 15 sentences.", "This heuristic demonstrates the effect of extraction on final performance.", "BERT (Devlin et al., 2019) with a classifier uses a linear layer stacked on top of the BERT outputs and predict whether a sentence should be selected.", "The model is trained on our training dataset in which sentences are labeled by the cheating method.", "We compare three pre-trained language generation models, including BART (Lewis et al., 2020b), T5 (Raffel et al., 2019) and MARGE (Lewis et al., 2020a) to generate abstractive entity descriptions.", "We fine-tuned these models on our training dataset in a sequence-to-sequence fashion.", "T5 is a text-to-text transformer pre-trained on a multi-task mixture of unsupervised and supervised tasks.", "We consider models of two sizes: base and large containing 220M and 770M parameters respectively.", "We use the Hugging Face version.", "5 BART introduces a denoising autoencoder combining a bidirectional encoder and auto-regressive decoder.", "It is trained by reconstructing text corrupted with a noising function.", "We consider the base model with 139M parameters.", "MARGE is a multi-lingual sequence-to-sequence model trained by reconstructing target documents retrieving paraphrased documents in other languages.", "It has around 960M parameters.", "Following other summarization tasks, we evaluate the quality of generated descriptions by ROUGE", "5 https://github.com/huggingface/transformers", "F1-score (Lin, 2004), which measures the overlap of unigram (R-1), bigram (R-2), and the longest matching sequence of words (R-L).", "In addition, we evaluate content selection by unigram and bigram recall to assess the importance of the extractive stage.", "Lastly, in addition to automatic evaluation, we also conduct human evaluation for non-redudancy, fluency, informativeness, and accuracy.", "Automatic evaluation In Table 8, we report the experimental results in the extractive stage.", "We observe that BERT consistently outperforms the unsupervised method Trivial, suggesting that training a model to predict sentence relevance can bring in immediate improvement in content selection.", "Meanwhile, the performance of BERT still lags behind the upper bound defined by Cheating by 1.7-7.3% in unigram.", "Table 7 presents ROUGE scores of various baselines in the abstractive stage.", "T5-large and BART show similar performance and outperform other models for both distant supervision and verified subsets, by a small margin.", "Increasing model size from T5-base (220M) to T5-large (770M) parameters leads to a relatively large performance gain.", "The human baseline is superior to all the models and maintains a R-L score over 33 in distant supervision and 48 in the verified subset.", "The large gap between the human baseline and the best-performing model shows there is much room for future work.", "Manual evaluation We present two human assessors with source documents and descriptions generated from different abstractive models and asked them to rate descriptions in terms of non-redundancy (does the description avoid repeating information?), fluency (Is the description well-formed and gramatically correct?), informativeness (does the description capture the salient information about the entity?) 
and faithfulness (Is the description faithful to the source text?).", "We compared BART, T5-Large, and T5-Base.", "For each model, we selected 100 descriptions and showed outputs of models to assessors side by side without revealing which model generates them.", "The score for each description was averaged between two assessors.", "As can be seen from Table 9, BART shows strong performance on all dimensions, except for fluency.", "Overall, all three models can generate flu-ent descriptions (high fluency ) but struggle with producing accurate statements (low faithfulness ).", "In most cases of low faithfulness, we observe that the model directly copies words from the input that are not relevant to the entity as part of the description or synthesize information that are not directly inferable from the input.", "Wikipedia description Carl Menger (February 23, 1840 February 26, 1921) was an Austrian economist and the founder of the Austrian School of economics.", "He contributed to the development of the marginal utility theory and to the formulation of a subjective theory of value.", "Human-authored description Carl Menger is an Austrian economist and one of founders of Marginal Utility Theory.", "He challenged the British labor theory of value and proposed subjective theory of value.", "He founded the Austrian School of Economics.", "In this section, we perform qualitative and quantitative analysis of baseline results to better understand strengths and weaknesses of models, and hypothesize avenues for future work.", "A qualitative analysis of model predictions suggests that these models tend not to generate novel words in the description, and mostly copy words from the original text.", "The entity-centric nature of DESCGEN makes extractive content selection difficult as evidenced by the gap between BERT extraction and the Cheating model (Section 6.2).", "For example, Table 10 shows the model-generated entity descriptions for Carl Menger using source documents from Table 13.", "BART, one of the best performing baselines, generates a description that has highest overlap with the Wikipedia description, but it still misses some important facts.", "T5-Base and MARGE confuse Carl Menger and his son, and incorrectly include information that does not describe the target entity.", "BART, T5, and MARGE are language models pretrained on text corpora including Wikipedia and Common Crawl.", "The parameters of the models appear to contain substantial linguistic and factual information (Petroni et al., 2019; Peters et al., 2018).", "In particular, we wonder if entity-related knowledge is captured in the pretraining stage and investigate the following questions:", "(a) Can the model memorize entity descriptions in pretraining stage?", "(b) Does the memorized knowledge improve model performance on generating entity descriptions?", "To investigate the questions, we test the model's ability to write a description given only the entity name instead of source documents.", "We train the model on our training dataset to adapt to the style of Wikipedia in a similar way.", "The results are shown in Table", "11. 
Considering the name-only baselines, we can see that all of them perform worse on Fandom entities than Wikipedia entities.", "However, the regular baselines perform similarly on Fandom and Wikipedia.", "This result suggests that facts about entities learnt in pretraining stage have much less influence on model performance when source documents are provided.", "To understand how the performance of the models varies with different types of entities, we report the performance breakdown for different entity types in Table", "12. Among domains in Wikipedia, our model obtains low scores on group and company, suggesting that they are more challenging than other domains.", "In Fandom, entities from the game domain prove to be most difficult.", "Siddhartha Banerjee, Prasenjit Mitra, and Kazunari Sugiyama.", "Multi-document abstractive summarization using ilp based multi-sentence compression.", "1981.", "Co-efficient kappa: Some uses, misuses, and alternatives.", "Educational and psychological measurement , 41(3):687699.", "Demian Gholipour Ghalandari, Chris Hokamp, Nghia The Pham, John Glover, and Georgiana Ifrim.", "2020.", "A large-scale multi-document summarization dataset from the Wikipedia current events portal.", "In summary, our analysis suggests there is room for improvement in extractive content selection and abstractive generation, particularly for new and emerging entities from less popular domains.", "In this work, we introduce DESCGEN , a new dataset for generating entity descriptions from mentions.", "DESCGEN contains 37K pairs of entity descriptions from Wikipedia and Fandom, and 481K automatically gathered source documents based on distant supervision.", "We also present a clean human-authored subset of 1,000 pairs for test.", "We show that, as compared to existing benchmarks, DESCGEN requires more abstractive summaries, which we argue better approximate the challenge of describing emerging entities.", "We also show that the performance of state-of-art models is far from human levels, suggesting that our task remains a significant challenge with room for improvement.", "Our study points to an interesting research direction on modeling entity knowledge from contexts.", "We hope it will facilitate future work on incorporating entity knowledge into downstream tasks and generating descriptions for emerging entities.", "This work was supported in part by the ARO (AROW911NF-16-1-0121) and the NSF (IIS1562364).", "The authors would like to thank Ari Holtzman, Bhargavi Paranjape, Elizabeth Clark, Terra Blevins and anonymous reviewers for helpful comments." ]
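The coverage/density comparison in the record above follows the extractive fragment statistics of Grusky et al. (2018): coverage measures how many summary words also appear in the source, while density weights shared fragments by their length. The sketch below is a hedged, simplified version of those two quantities, assuming whitespace tokenization and a greedy search for shared token sequences; it is not the code behind the paper's figures.

```python
# Hedged sketch of extractive fragment coverage and density (Grusky et al., 2018).
# Greedy matching of shared token sequences between a summary and the
# concatenated source documents; simplified for illustration.

def extractive_fragments(article_tokens, summary_tokens):
    """Greedily find maximal summary token sequences that also occur in the article."""
    fragments, i = [], 0
    while i < len(summary_tokens):
        best = 0
        for j in range(len(article_tokens)):
            k = 0
            while (i + k < len(summary_tokens) and j + k < len(article_tokens)
                   and summary_tokens[i + k] == article_tokens[j + k]):
                k += 1
            best = max(best, k)
        if best > 0:
            fragments.append(summary_tokens[i:i + best])
            i += best
        else:
            i += 1
    return fragments

def coverage_and_density(article, summary):
    a, s = article.lower().split(), summary.lower().split()
    frags = extractive_fragments(a, s)
    coverage = sum(len(f) for f in frags) / len(s)       # fraction of copied words
    density = sum(len(f) ** 2 for f in frags) / len(s)   # avg squared fragment length
    return coverage, density

print(coverage_and_density(
    "carl menger founded the austrian school of economics",
    "menger founded the austrian school"))
```

A description with high coverage but low density, as reported for DESCGEN, shares many individual words with the sources but rarely copies long phrases.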
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "result", "result", "abstain", "abstain", "abstain", "method", "abstain", "objective", "result", "abstain", "objective", "abstain", "method", "objective", "objective", "other", "other", "method", "other", "other", "objective", "other", "objective", "other", "other", "other", "other", "method", "other", "other", "objective", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "result", "abstain", "method", "other", "other" ]
[ "Rumours can spread quickly through social media, and malicious ones can bring about significant economical and social impact.", "Motivated by this, our paper focuses on the task of rumour detection; particularly, we are interested in understanding how early we can detect them.", "Although there are numerous studies on rumour detection, few are concerned with the timing of the detection.", "A successfully-detected malicious rumour can still cause significant damage if it isn't detected in a timely manner, and so timing is crucial.", "To address this, we present a novel methodology for early rumour detection.", "Our model treats social media posts (e.g. tweets) as a data stream and integrates reinforcement learning to learn the number minimum number of posts required before we classify an event as a rumour.", "Experiments on Twitter and Weibo demonstrate that our model identifies rumours earlier than state-of-the-art systems while maintaining a comparable accuracy.", "The concept of rumour has a long history, and there are various definitions from different research communities (Allport and Postman, 1947).", "In this paper, we follow a commonly accepted definition of rumour, that it is an unverified statement, circulating from person to person and pertaining to an object, event, or issue of public concern and it is circulating without known authority for its truthfulness at the current time, but it may turn out to be true, or partly or entirely false; alternatively, it may also remain unresolved (Peterson and Gist, 1951; Zubiaga et al., 2018).", "Rumours have the potential to spread quickly through social media, and bring about significant economical and social impact.", "Figure 1 illustrates an example of a rumour propagating on TWITTER .", "The source message started a claim about 0 hour 4 hours 8 hours 12 hours 16 hours 20 hours 24 hours This is unbelievable, or should be.", "the cause of Michael Brown's shooting, and it was published shortly after the shooting happened.", "It claimed that he was shot ten times by the police for stealing candy.", "The message was retweeted by multiple users on TWITTER , and within 24 hours there were about 900K users involved, either by reposting, commenting, or questioning the original source message.", "From Figure 1, we see some users (e.g. User 7) question the veracity of the original message.", "Had the rumour been identified timely and rebutted, its propagation could have been contained.", "Most studies (Qazvinian et al., 2011; Zhang et al., 2015) consider rumour detection as a binary classification problem, where they extract various features to capture rumour indicative signals for detecting a rumour, and a few recent works explore deep learning approaches to enhance detection accuracy (Long et al., 2017; Ruchansky et al., 2017).", "In all these studies, however, the timeliness of the rumour detection is not evaluated.", "There are a few exceptions.", "In Ma et al. (2015) and Kwon et al. (2017), the authors define a checkpoint (e.g. 
number of posts or time elapsed after the source message) in the timeline and use all the posts prior to this checkpoint to classify a rumour.", "The checkpoint is often a pre-determined value for all rumours, and so does not capture the variation of propagation patterns for different rumours.", "The focus of our paper is on early rumour detection.", "That is, our aim is to identify rumours as early as possible, while keeping a reasonable detection accuracy.", "Our early rumour detection system (ERD) features two modules: a rumour detection module that classifies whether an event (which consists of a number of posts) constitutes a rumour, and a checkpoint module that determines when to trigger the rumour detection module.", "ERD treats incoming posts as a data stream and monitors the posts in real time.", "When ERD receives a new post, this post along with all prior posts of the same event will be used to decide if it constitutes an appropriate checkpoint to trigger the rumour detection module.", "ERD integrates reinforcement learning for the checkpoint module to guide the rumour detection module, using its classification accuracy as a reward.", "Through reinforcement learning ERD is able to learn the minimum number of posts required to identify a rumour.", "In other words, ERD can dynamically determine the appropriate checkpoint for different rumours, and this feature is the core novelty of our methodology.", "To evaluate our approach, we use standard microblog data sets from WEIBO and TWITTER .", "We compare our method with benchmark rumour detection systems (Ma et al., 2016; Ruchansky et al., 2017; Dungs et al., 2018) and found that ERD could on average identify rumours within 7.5 and 3.4 hours with an accuracy of 93.3% and 85.8% on WEIBO and TWITTER respectively.", "Our detection accuracy performance is better than a state-of-the-art system that that detects rumours within 12 hours.", "To summarise, we present a novel methodology for rumour detection.", "Unlike most rumour detection systems, our approach determines the checkpoint for each event dynamically, by learning when it should classify it as a rumour.", "Our experimental results showed that ERD outperforms state-of-the-art methods over two benchmark data sets in detection accuracy and timeliness.", "Our proposed framework is flexible and the individual modules (i.e. the rumour detection and checkpoint module) can be extended to incorporate more complex networks for further improvements.", "An open source implementation of our model is available at: https://github.com/ DeepBrainAI/ERD .", "Traditionally, research on rumour detection has mainly focused on developing handcrafted features for machine learning algorithms (Qazvinian et al., 2011).", "Takahashi and Igata (2012) propose a method for rumour detection on Twitter using cue words and tweets statistics.", "Yang et al. (2012) apply two new types of features client-based and location-based features to rumour detection on Sina Weibo.", "Beyond this, user-based (Liang et al., 2015) and topic-based (Yang et al., 2015) features have also been explored.", "Friggeri et al. (2014) demonstrate that there are structural differences in the propagation of rumours and non-rumours, and Wu et al. (2015) and Ma et al. 
(2017) experiment with using these propagation patterns extensively to improve detection.", "More recently, deep learning models are explored for the task.", "Compared to traditional machine learning approaches, these deep learning models tend to rely less on sophisticated handcrafted features.", "Ma et al. (2016) introduce a rumour detection model for microblogs based on recurrent networks.", "The input to their model is simple tf-idf features but it outperforms models leveraging handcrafted features.", "Sampson et al. (2016) show that implicit linkages between conversation fragments improve detection accuracy.", "Long et al. (2017) present a deep attention model that learns a hidden temporal representation for each sequential posts to represent the hypothesis.", "Ruchansky et al. (2017) integrate textual, user response, and source information into their neural models and achieve better performance.", "Most of these works focus on detection accuracy, and so largely ignore the timing of the detection.", "Ma et al. (2015) develop a dynamic time series structure to incorporate temporal information to the features to understand the whole life cycle of rumours.", "Zhao et al. (2015) propose a detection model using a set of regular expressions to find posts that question or rebut the rumour to detect it earlier.", "Dungs et al. (2018) present an approach that checks for a rumour after 5 or 10 retweets.", "These models are interested in early rumour detection, although the checkpoint for triggering a detection is pre-determined, and succeeding posts after the checkpoint are usually ignored.", "On a similar note but a different task, Farajtabar et al. (2017) experiment with reinforcement learning by combining it with a point process network activity model to detect fake news and found some success.", "Let E denote an event, and it consists of a series of relevant posts x i , where x 0 denotes the source message and x T the last relevant message.", "1 The objective of early rumor detection is to make a classification decision whether E is a rumour as early as possible while keeping an acceptable detection accuracy.", "2 As shown in Figure 2, ERD has two modules: a rumour detection module (RDM) that classifies whether an event is a rumour, and a checkpoint module (CM) that decides when the rumour detection module should be triggered.", "The checkpoint module plays an important role here, as it is responsible for the timeliness of a detection.", "RDM contains three layers: a word embedding layer that maps input words into vectors, a max-pooling layer that extracts important features of a post, and a GRU (Cho et al., 2014) that processes the sequential posts of an event.", "In the word embedding layer, we map words in post x i into vectors, yielding vectors e ji for each word.", "To capture the most salient features of a post, we apply a max pooling operation (Collobert et al., 2011; Kim, 2014; Lau et al., 2017), producing a fixed size vector m i : m i = maxpool ([ W m e 0 i ; W m e 1 i ; ... 
; W m e Ki ]) 1 Relevant posts are defined as retweets or responses to a source message.", "2 The earliest possible time to classify E is when we receive the first post x 0 .", "where K is the number of words in the post.", "Henceforth W in all equations are model parameters.", "To capture the temporal relationship between multiple posts, we use a GRU (Cho et al., 2014): h i = GRU ( m i , h i \u0000 1 ) (1) We take the final state h N ( N = number of posts received to date) and use it to perform rumour classification: p = softmax ( W p h N + b p ) (2) where p 2 R 2 , i.e. p 0 ( p 1 ) gives the probability of the positive (negative) class.", "3 3.2 Checkpoint Module (CM) Rather than setting a static checkpoint when to classify an event as a rumour, CM learns the number of posts needed to trigger RDM.", "To this end, we leverage deep reinforcement learning to identify the optimal checkpoint.", "We reward CM based on RDM's accuracy and also penalise CM slightly every time it decides to not trigger RDM (and continue to monitor the event).", "This way CM learns the trade-off between detection accuracy and timeliness.", "The reward function is detailed in Section 3.3.", "We use the deep Q-learning model (Mnih et al., 2013) for CM.", "The optimal action-value function Q ( s, a ) is defined as the maximum expected return achievable under state s , which can be formulated as follows: Q ( s, a ) = E s 0 \" [ r + \u0000 max a 0 Q i ( s 0 , a 0 ) | s, a ] where r is the reward value, \u0000 the discount rate, and the optimal action in all action sequence a 0 is selected to maximise the expected value of r + \u0000 Q ( s 0 , a 0 ) . The optimal action-value function obeys the Bellman equation and is used for iterative value update: Q i +1 ( s, a ) = E [ r + \u0000 max a 0 Q i ( s 0 , a 0 ) | s, a ] The above iterative algorithm will converge and reach the optimal action value function, i.e. Q i ! Q when q ! 1 (Sutton et al., 1998). 3 Although sigmoid activation is more appropriate here as it is a binary classification task, we used the softmax function because in preliminary experiments we considered a third neural class. x 0 e 0 m 0 h 0 h i a i p i e 1 m 1 h 1 x 1 e N m N h N x N reward i output Checkpoint Module Rumor Detection Module GRU State GRU Layer Max-pooling Layer Words Embedding Layer Inputs New Length Sequence Figure 2: Architecture of ERD. CM takes as input the hidden states produced by the GRU in RDM to compute the action-value function using a two-layer feedforward network: a i = W a ( ReLu ( W h h i + b h )) + b a (3) where a i 2 R 2 is the action value for terminate ( a 0 i ) or continue ( a 1 i ) at post x i . Note that a random action will be taken with the probability of irrespective to the action value a i . 3.3 Joint Training We train both RDM and CM jointly, and the training process is similar to that of generative adversarial networks (Goodfellow et al., 2014). The checkpoint module serves as the generator for action sequences, while the detection module is the discriminator. A key contrast, however, is that the two modules are working cooperatively rather than adversarially. CM is trained using RDM's accuracy as reward. To compute the reward, we first pre-train RDM based on cross entropy: \u0000 X j [ L j log( p 0 j ) + (1 \u0000 L j )(log( p 1 j ))] + l 2 where L j is a binary label indicating the true class for event E j , p is computed based on Equation (2), l 2 is the L2 loss for RDM parameters, and is a hyper-parameter for scaling l 2 . 
We then train CM while keeping RDM's parameters fixed. In each step of the training, new posts that have arrived and previous GRU states are first fed to the RDM to produce the new states (Equa-tion (1)), which will in turn be used by CM to calculate the action values (Equation (3)). This decides whether the system takes the continue or terminate action. If terminate is chosen, the reward is given in accordance to RDM's prediction; otherwise, a small penalty is incurred: r i = ( log M, terminate with correct prediction \u0000 P, terminate with incorrect prediction \u0000 \" , continue where M is the number of correct predictions accumulated thus far, P is a large value to penalise an incorrect prediction, and \" is a small penalty value for delaying the detection.", "To optimise our action value function, we apply the deep Q-learning approach with the experience replay algorithm (Mnih et al., 2013).", "Based on the optimal action-value function Q ( s, a ) , the objective of the action value function y i is given as Interval 0 Interval 1 Interval 2 Interval 3 Interval 4 Sample distribution 0:00 2:00 4:00 10:00 12:00 Interval 0 Interval 1 Interval 2 0:00 2:00 4:00 6:00 8:00 10:00 12:00 0:00 2:00 4:00 6:00 8:00 10:00 12:00 Interval 0 Interval 1 Interval 2 Interval 5 0:00 2:00 4:00 6:00 8:00 10:00 12:00 Fixed number of posts Fixed time intervals 6:00 8:00 Dynamic intervals Time Figure 3: Three bucketing strategies to process streaming posts in batches.", "= ( r i , terminate r i + \u0000 max a 0 Q ( h i +1 , a 0 ; ) , continue", "where \u0000 is the discount rate that decides how much experience is taken into consideration.", "And lastly, CM is optimised by minimising the cost: ( y i \u0000 a i ) 2 We train CM and RDM in an alternating fashion, i.e. we train CM for several iterations while keeping RDM's parameters fixed, and then we move to train RDM for several iterations while keeping CM's parameters fixed.", "Training converges when CM's reward value stabilises between consecutive epochs.", "For processing efficiency purposes, instead of processing each incoming post individually, we experiment with several bucketing strategies that group posts together and process them in batches.", "As Figure 3 illustrates, we group posts based on: (1) a fixed number of posts (FN), e.g. every 3 posts (i.e. 3 posts are combined together forming 1 single post); (2) a fixed time interval (FT), e.g. every 2 hours; and (3) a dynamic interval (DI) that ensures the number of posts collected in an interval is close to the mean number of posts collected in an hour in the full data set.", "We experiment with two data sets: WEIBO and TWITTER , developed by Ma et al. (2016) and Zubiaga et al. 
(2016) respectively.", "4 Statistics of the data sets is presented in Table 1.", "Even though both data sets have a comparable number of events, WEIBO is an order of magnitude larger than TWITTER as there are more posts per event.", "We reserve 10% of the events as the validation set for hyper-parameter tuning and early stopping, and split the rest in a ratio of 3:1 for training and test partitions.", "As a baseline, we use an SVM with tf-idf features.", "We also include several state-of-the-art rumour detection systems for comparisons: CSI (Ruchan-sky et al., 2017) on WEIBO ; CRF (Zubiaga et al., 2016) and HMM (Dungs et al., 2018) on TWITTER ; and GRU-2 (Ma et al., 2016) on both data sets.", "For GRU-2 (Ma et al., 2016) we also report performance on several variants that use a different recurrent network: simple RNN with tanh activation (RNN); single-layer LSTM (LSTM); and single-layer GRU (GRU-1).", "CSI is a neural model that integrates text and users representations to classify rumours.", "CRF and HMM are classical models that use crowd opinions (a.k.a. stance) of the event for classification.", "GRU-2 is based on a two-layer GRU that captures contextual information of posts with tf-idf features as inputs.", "4 There is a small difference in the definition of a ru-mour in these two data sets.", "For WEIBO , all labelled rumours are false rumours (i.e. the source message contains verified untruthful statements), where else for TWITTER , rumours maybe truthful, untruthful, or unverified.", "We preprocess each post by segmenting them into words, and remove all stop words.", "5 We pretrain word embeddings and kept them fixed during training.", "6 is set to 0.01 and \u0000 to 0.95; both values are determined empirically based on validation data.", "We use the Adam optimiser (Kingma and Ba, 2014) with a learning rate of 0.001 during joint training, which we found to produce stable training.", "We present the training loss and reward values over time during joint training in Figure 4 and Figure 5.", "We pre-train RDM for 2 epochs before joint training, and then we train RDM and CM in an alternating fashion for 1 epoch and 200K iterations respectively.", "We can see that loss declines steadily after 20K iterations and converges 5 For TWITTER , words are tokenised using white spaces, and stopword list is based on NLTK (Bird et al., 2009).", "For WEIBO , Jieba is used for tokenisation: https://pypi.", "org/project/jieba/ ; and stopword list is a customised list based on: http://blog.sina.com.cn/s/blog_ a19ab3770102wjav.html .", "6 For WEIBO , the embeddings are pre-trained using word2vec (Mikolov et al., 2013) on a separate Weibo data set we collected.", "For TWITTER , the embeddings are pre-trained GloVe embeddings (Pennington et al., 2014).", "Unknown words are initialised as zero vectors.", "at around 50K iterations.", "The reward curve, on the other hand, fluctuates more as the reward was calculated based on the accuracy of RDM.", "When switching between training RDM and CM, the reward value tends to change abruptly, although over time we see a consistent improvement.", "Recall that we explore 3 different methods to group posts in order to process them in batches (Section 3.4).", "Here we evaluate them on rumour classification accuracy over the validation set of TWITTER .", "Note that we do not use CM here (and hence no reinforcement learning is involved) we simply use all posts of an event to perform rumour classification with RDM.", "In terms of metrics we use standard accuracy, precision, 
recall and F1 scores.", "Results are presented in Table 2.", "We see FN produces the best performance, and so FN is used for all following experiments as the default bucketing strategy.", "7 As certain events have a long delay between posts, we also incorporate a maximum delay of one hour before processing the posts in a batch.", "In this section, we assess how accurately the models classify rumours.", "All baselines and benchmark systems uses all posts of an event to perform classification, with the exception of HMM which uses only the first 5 posts.", "For our models, we present: (1) the full model ERD, which uses a subset of posts for classification (checkpoint decided by CM); and (2) RDM, which uses the full set of posts.", "Results are detailed in Table 3 and 4.", "We can see that RDM outperforms all models across most metrics, including state-of-the-art rumour detection systems CSI (marginally) and CRF (substantially).", "ERD, on the other hand, performs very competitively, outperforming most 7 FN value: 5 posts for WEIBO and 2 posts for TWITTER .", "benchmark systems and baselines, with the exception of CSI on WEIBO .", "Note, however, that unlike most other systems, ERD leverages only a subset of posts for rumour classification.", "HMM is the only benchmark system on TWITTER that uses a subset (first 5), and its performance is markedly worse than that of ERD (which uses 4.03 posts on average).", "Next we evaluate the timeliness of the detection, and we focus on comparing our system with GRU-2 (Ma et al., 2016), as it performed competitively in Section 4.5.2.", "Note that GRU-2 uses a manually set checkpoint (12 hours) that were found to be optimal, while ERD determines the checkpoint dynamically.", "TWITTER , the majority of the events (approxi-mately 80%) are classified within the first 6 hours.", "GRU-2's optimal checkpoint is 12 hours (dashed line), and so ERD is detecting rumours much earlier than GRU-2.", "We next present the classification accuracy of these events over time (again, in 6-hour interval) in Figure", "7. ERD generally outperforms GRU-2 (dashed lines) over all checkpoints.", "To be fair, checkpoints that are longer than 12 hours are not exactly comparable, as ERD uses more posts than GRU-2 in these instances.", "But even if we consider only the first 2 intervals (0-6 and 6-12 hours), ERD still outperforms GRU-2 across both data sets, demonstrating that ERD detects rumours earlier and more accurately.", "For the two checkpoints on WEIBO where GRU-2 outperforms ERD, in the first checkpoint (24-30) we find that there are only 5 events and so the difference is unlikely to be statistically robust.", "For the second checkpoint (42-48), we hypothesise that these events are possibly the diffi-0.80 0.85 0.90 0.95 1.00 0 4 8 12 16 20 24 28 32 36 40 44 48 A cc u r a c y Detection Deadline (Hours) RDM-Weibo RDM-Twitter ERD-Weibo ERD-Twitter Figure 8: Detection accuracies of ERD and RDM over time.", "cult cases, and as such the classification decision is deferred until much later (and classification performance is ultimately still low due to its diffi-culty).", "To understand the advantage of incorporating reinforcement learning (CM) for rumour detection, we compute the detection accuracy over time for ERD and RDM in Figure", "8. 
The dashed lines indicate the average accuracy performance of ERD, which detects rumours on average in 7.5 and 3.4 hours on WEIBO and TWITTER respectively.", "The solid lines show the accuracy performance of RDM, which increases over time as it has more evidence.", "For RDM to achieve the performance of ERD, we see that it requires approximately at least 20 hours of posts on both data sets.", "This highlights the importance of the checkpoint module, which allows ERD to detect rumours much earlier.", "In certain events, they are detected within 3 minutes.", "To provide a qualitative analysis for our approach, we showcase an example of a rumour event from WEIBO in Table 5.", "We present a set of salient words (second column) and their translations (third column) extracted from posts published during a particular period (first column) using simple tf-idf features.", "The rumour was started by a message claiming that hairy crabs contain harmful hormones and toxins on August 18th, 2012.", "After the message was posted, within 12 hours 2.3M users participated in its propagation, either by re-posting, commenting, or questioning the original source message.", "The rumour spread quickly and led to significant economic damage to the aquaculture industry in China.", "Officially the rumour was rebutted after 24 hours, but in Table 5 we see that ERD detects the rumour in 34 minutes.", "We present ERD, an early rumour detection system.", "Rather than setting a static checkpoint that determines when an event should be classified as rumour, ERD learns dynamically the minimum number of posts required to identify a rumour.", "To this end, we integrate reinforcement learning with recurrent neural networks to monitor social media posts in real time to decide when to classify rumours.", "We evaluate our model on two standard data sets, and demonstrate that ERD identifies rumours within 7.5 hours and 3.4 hours on WEIBO and TWITTER on average, compared to 12 hours of a competitive system.", "In terms of detection accuracy, ERD achieves a performance of 93.3% and 85.8%, which is comparable to state-of-the-art rumour detection systems.", "This work is partially funded by the National Natural Science Foundation of China (61502115, U1636103, U1536207).", "We would also like to thank Wei Gao and Jing Li for their valuable suggestions." ]
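The rumour detection module (RDM) described in this record combines a word-embedding layer, per-post max-pooling, a GRU over the post sequence, and a softmax over the final state. The following is a minimal PyTorch sketch of that forward pass; dimensions, class names, and the toy input are illustrative and do not reproduce the authors' released implementation.

```python
# Minimal sketch of the RDM forward pass: embed words, max-pool within each post,
# run a GRU over posts, and classify from the final hidden state.

import torch
import torch.nn as nn

class RDM(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.proj = nn.Linear(emb_dim, hidden_dim)            # plays the role of W_m
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_classes)         # plays the role of W_p, b_p

    def forward(self, posts):
        # posts: (batch, num_posts, num_words) of word ids
        e = self.embed(posts)                                 # (b, n, k, emb)
        m = self.proj(e).max(dim=2).values                    # max-pool over words -> (b, n, hid)
        states, _ = self.gru(m)                               # one hidden state per post
        logits = self.out(states[:, -1])                      # classify from the last state h_N
        return torch.softmax(logits, dim=-1), states          # class probs + states for the CM

model = RDM()
fake_event = torch.randint(1, 10000, (1, 6, 12))              # 1 event, 6 posts, 12 words each
probs, states = model(fake_event)
print(probs.shape, states.shape)                              # (1, 2) and (1, 6, 128)
```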
[ "abstain", "objective", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "objective", "abstain", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "objective", "abstain", "other", "other" ]
[ "Most of the previous Rhetorical Structure Theory (RST) parsing methods are based on supervised learning such as neural networks, that require an annotated corpus of sufficient size and quality.", "However, the RST Discourse Treebank (RST-DT), the benchmark corpus for RST parsing in English, is small due to the costly annotation of RST trees.", "The lack of large annotated training data causes poor performance especially in relation labeling.", "Therefore, we propose a method for improving neural RST parsing models by exploiting silver data , i.e., automatically annotated data.", "We create large-scale silver data from an unlabeled corpus by using a state-of-the-art RST parser.", "To obtain high-quality silver data, we extract agreement subtrees from RST trees for documents built using the RST parsers.", "We then pre-train a neural RST parser with the obtained silver data and fine-tune it on the RST-DT.", "Experimental results show that our method achieved the best micro-F1 scores for Nuclearity and Relation at 75.0 and 63.2, respectively.", "Furthermore, we obtained a remarkable gain in the Relation score, 3.0 points, against the previous state-of-the-art parser.", "Rhetorical Structure Theory (RST) (Mann and Thompson, 1987) is one of the most widely used theories for representing the discourse structure of a text as a tree.", "RST trees are a kind of constituent tree, whose leaves are Elementary Discourse Units (EDUs), i.e., clause-like units, and whose non-terminal nodes cover text spans consisting of either a sequence of EDUs or a single EDU.", "The label of a non-terminal node represents the attribution of a text span, i.e., nucleus (N) or satellite (S).", "A discourse relation is also assigned between two adjacent non-terminal nodes.", "(Wang et al., 2017b; Yu et al., 2018; Kobayashi et al., 2020; Lin et al., 2019; Zhang et al., 2020), which require a high-quality annotated corpus of sufficient size.", "Generally, they train the following three components of the RST parsing: (1) structure prediction by splitting a text span consisting of contiguous EDUs into two smaller ones or merging two adjacent spans into a larger one, (2) nuclearity status prediction for two adjacent spans by solving a 3-class classification problem, and (3) relation label prediction for two adjacent spans by solving an 18-class classification problem (see Section 3.3 for details).", "However, it is costly to annotate RST trees for a huge collection of documents, and thus it is difficult to obtain a large amount of human-annotated data for RST parsing.", "As a result, research on RST parsing has focused on English, with the largest annotated corpus being the RST Discourse Treebank (RST-DT) (Carlson et al., 2001), although even this is still small with only 385 documents.", "1 Many RST parsing methods have recently been developed based on neural models (Ji and Eisenstein, 2014; Li et al., 2014a, 2016; Liu and Lapata, 2017; Braud et al., 2016, 2017).", "Among them, Kobayashi et al. 
(2020) is the current state-of-the-art system and is based on the neural top-down method.", "While its Span and Nuclearity scores achieved the highest level, its Relation score still has room for improvement.", "One of the reasons for its poor Relation score might be its small amount of training data for solving the 18-class classification problem.", "Currently, we can refer to various studies on improving neural models for NLP tasks through acquiring large-scale synthetic training data, sometimes called silver data .", "Among them, one of the studies on Neural Machine Translation (NMT) 1 We can find some exceptions for other languages such as Spanish (da Cunha et al., 2011) and German (Stede and Neumann, 2014).", "(Sennrich et al., 2016) introduced a simple learning framework: first pre-train an NMT model with silver data , i.e., pseudo-parallel data generated by automatic back-translation, and then fine-tune it with gold data , i.e., real parallel data, to overcome the data sparseness problem.", "Since the frameworks successfully improved the NMT systems, it has become a standard approach.", "Inspired by the above research, we propose a method for improving a student neural parser by exploiting large-scale silver data , thus generating RST trees using an automatic RST parser.", "2 Specifi-cally, we improve the state-of-the-art neural RST parser (Kobayashi et al., 2020), in terms of Relation, by employing another RST parser whose Relation score is also state-of-the-art (Wang et al., 2017b) as a teacher parser to generate the silver data.", "To yield high-quality silver data, we extract a collection of agreement subtrees (ASTs), which are common subtrees among multiple RST trees automatically parsed by the teacher parser with different seeds.", "Our method includes an efficient algorithm for extracting the agreement subtrees to handle large-scale data.", "We first pre-train the student parser by using the obtained silver data .", "We then fine-tune parameters of the parser on gold data , using the RST-DT.", "Experimental results on the RST-DT clearly indicate the effectiveness of our silver data.", "Our method obtained remarkable Nuclearity and Relation F 1 scores of 75.0 and 63.2, respectively.", "Early studies on RST parsing were based on traditional supervised learning methods with handcrafted features and the shift-reduce or CKY-like parsing algorithms (duVerle and Prendinger, 2009; Feng and Hirst, 2012; Joty et al., 2013, 2015; Feng and Hirst, 2014).", "Recently, Wang et al. (2017b) proposed a shift-reduce parser based on SVMs and achieved the current best results in classical statistical models on the RST-DT.", "The method first built nuclearity-labeled RST trees and then assigned relation labels between two adjacent spans consisting of a single or multiple EDUs.", "2 Nguyen et al. (2020) proposed a similar approach in NMT and introduced a method named data diversification: it diversifies the training data by using multiple forward and backward translation models.", "We can find some weak supervision approaches for other discourse representation formalisms such as (Badene et al., 2019).", "many NLP tasks, several neural network-based models have been proposed for RST parsing (Ji and Eisenstein, 2014; Li et al., 2014a, 2016; Liu and Lapata, 2017).", "Yu et al. 
(2018) proposed a shift-reduce parser based on neural networks and leveraged the information from their neural dependency parsing model within a sentence for RST parsing.", "The best Relation score on the RST-DT, i.e., F 1 of 60.2, was achieved with their method.", "Recently, a top-down neural parser was proposed (Lin et al., 2019) for use only at the sentence-level.", "The method parses a tree in a depth-first manner with a pointer-generator network.", "Zhang et al. (2020) extended the method and applied it to document-level RST parsing.", "Kobayashi et al. (2020) proposed another top-down RST parsing method exploiting multiple granularity levels in a document and achieved the best Span and Nuclearity scores on the RST-DT, i.e., F 1 of 87.0 and 74.6, respectively.", "Since the RST-DT, the largest treebank, contains only 385 documents, several studies have been conducted on overcoming the problem of a limited number of training data.", "Braud et al. (2016) leveraged multi-task learning not only with 13 related tasks as an auxiliary task but also for multiple views of discourse structures, such as Constituent, Nuclearity, and Relation.", "Braud et al. (2017) used multilingual RST discourse datasets that share the same underlying linguistic theory.", "Huber and Carenini (2019) adopted distant supervision with an auxiliary task of sentiment classification to create large-scale training data, i.e., they trained a two-stage RST parser (Wang et al., 2017a) with RST trees automatically built based on attention and sentiment scores from the Multiple-Instance Learning network, which was trained with a review dataset.", "However, these studies need other annotated corpora than the RST-DT, which means we still face the problem of being dependent on costly annotated corpora.", "Jiang et al. 
(2016) proposed a framework for enriching training data based on co-training to improve the performance for infrequent relation labels.", "However, the method failed to improve the overall Relation score, while they did not aim at improving the Span and Nuclearity scores.", "Unsupervised RST parsing methods have also been proposed recently (Kobayashi et al., 2019; Nishida and Nakayama, 2020).", "Since they are unsupervised, they do not require any annotated corpora.", "However, they can predict only tree structures and cannot predict nucleus and relation labels.", "Figure 1: Overview of the proposed method.", "Therefore, the predicted trees cannot be used for learning to predict relation labels.", "We should mention the relationship of our work with semi-supervised learning as a machine learning framework.", "First, the reason why we do not adopt self-training, where the student and teacher parsers are the same, but instead use two different parsers that rely on different parsing algorithms is that we can acquire instances that the student parser cannot correctly parse yet the teacher parser can parse as the training data.", "Second, using multiple different RST parsers in a semi-supervised manner in our work might seem reminiscent of co- or tri-training.", "While co- or tri-training is attractive, it is time consuming to repeat the step of alternately training multiple different neural network-based parsers many times.", "Thus, previous studies have focused on simplifying the repetition step in constituency and dependency parsing (McClosky et al., 2006; Yu et al., 2015; Pekar et al., 2014; Weiss et al., 2015; Li et al., 2014b).", "We believe our method is similar to these simplified versions as a semi-supervised framework with two different RST parsers.", "Traditional semi-supervised learning frameworks, such as self-, co-, and tri-training, tend to iteratively train a student classifier with training data that contains human-annotated (gold) data and iteratively added silver data.", "Since neural network-based models require a large amount of time for training, the iterative procedure is not suitable for training them.", "Furthermore, the training method may be affected by the bias problem in relation-label distribution because frequent labels in the original training data become yet more frequent in the future training data.", "For these reasons, we adopt a simple pre-training and fine-tuning strategy, which is inspired by the NMT research (Sennrich et al., 2016), to train a student RST parser.", "Since early statistical RST parsing methods relied on handcrafted features, i.e., sentence-level features obtained from parse trees and document-level features, they require complete documents with complete sentences for their feature extraction.", "On the other hand, recent neural models do not necessarily need such features.", "Thus, we can exploit subtrees as training data for the neural networks.", "Our method involves the following two steps: First, we extract a collection of ASTs from RST trees for each document in unlabeled data as the silver data.", "In this step, each document is first parsed using multiple teacher RST parsers with different seeds, trained with a gold dataset, the RST-DT.", "We then apply our algorithm for extracting the ASTs, which are common subtrees 
among multiple automatically parsed RST trees.", "In the second step, we pre-train the student RST parser with the collection of ASTs to complement the amount of training data.", "The parameters of the student parser are then fine-tuned on the RST-DT.", "Figure 1 shows an overview of our proposed method.", "A good strategy for obtaining high-quality silver data is to get agreement among the results of multiple RST parsers.", "However, it is difficult to reach agreement for the entire RST trees at the document-level because their size is big.", "Thus, we believe we cannot collect enough silver data using agreement for the whole trees.", "On the other hand, we find that many subtrees agreed among multiple RST trees, even when the whole trees do not agree with each other.", "Accordingly, we extract ASTs as the silver data.", "mon subtrees among multiple RST trees for a document.", "Note that we need to extract multiple maximal common subtrees among the RST trees.", "This requires a different algorithm from the maximum agreement subtree problem, which is well-known in bioinformatics (Deepak and Fernndez-Baca, 2014).", "Thus, we develop the algorithm in Algorithm", "1. This algorithm follows a tree-traversal algorithm and works with O ( n ) , where n indicates the number of nodes in an RST tree.", "In the algorithm, a tree is represented as a fully-labeled nested span structure (see the example in Figure 2).", "The function AGREEMENT receives an arbitrary span as the input and returns a Boolean value indicating whether the subtree for the span is an AST.", "AGREEMENT first counts how many times the input span appears in the set of given RST trees and checks the status of the left and right children of the input span.", "Len() returns the length of the span and Count() returns the frequency of the fully-labeled span among the trees, which indicates how many trees agree on the subtree.", "The minimum and maximum values of Count() are 1 and k respectively, where k indicates the number of RST trees.", "The variables S c , S l , and S r store the Boolean value for the input span and the left and right children of the span, respectively.", "Here, root, leftChild, and rightChild are functions for returning the root span and the left and right children spans, respectively.", "To obtain the status of each child, AGREEMENT calls itself with the child span.", "When the frequency of the input span is k , indicating all of the trees in the set agree on the span, and the status of the left and right children is True, indicating the left and right children are ASTs, the function returns True.", "Furthermore, the information regarding which subtrees are ASTs is stored in variable S during the execution of AGREEMENT .", "The function FINDROOT returns the list of ASTs, based on the information in variable S, given by AGREEMENT .", "FINDROOT first checks the S(span), the Boolean value of the span.", "If it is True, the function appends the span, corresponding to the root node of an AST, to the output.", "Otherwise, it searches both left and right children for ASTs recursively.", "The function, therefore, lists all of the maximal ASTs in a depth-first fashion, based on the information in variable S. 
In the algorithm, l min and l max are used to control the size of extracted ASTs.", "If the trees parsed using the multiple teacher parsers significantly differ from each other, the extracted ASTs tend to be small, which might become noise.", "To avoid such noise, we do not take into account subtrees with fewer than l min EDUs.", "Excessively large subtrees are difficult to handle because they need a lot of time and space for training.", "Therefore, if the size of a subtree exceeds l max , the algorithm tries to find smaller ASTs from both its left and right children.", "Initially, we call the function AGREEMENT with an arbitrary tree among the multiple RST trees.", "We show an example of extracting ASTs in Figure 2 [a code sketch of this extraction procedure is also given after this sentence list].", "Assume the two trees at the left are from two RST parsers.", "The right part represents how the algorithm works with the top tree at the left as the input.", "In the figure, two subtrees consisting of spans (1,4) and (5,7) are extracted as ASTs since the frequency of", "these two spans and all their descendant spans is 2, which is the number of given RST trees.", "Note that, while several spans, such as spans (2,3) and (6,7), are also common subtrees, we do not extract them since they are contained in either span (1,4) or (5,7).", "As described in Section 3.1, the advantage of recent neural models is that they can utilize the annotation for partial documents, or subtrees, as training data.", "Among the neural models, the span-based neural top-down RST parsing method (Kobayashi et al., 2020) achieved the best Span and Nuclearity scores.", "Thus, we employ it as the student parser.", "The method builds a tree by recursively splitting a text span into two smaller ones while 
predicting the nuclearity status and relation labels.", "As we explain below, the parser can be trained with arbitrary subtrees for spans consisting of EDUs.", "For each position k in a span that consists of the i-th to the j-th EDU, a scoring function s_split(i, j, k) is defined as in Eq. (1), where h_{i:k} and h_{k+1:j} are the vector representations of the left and right spans, respectively.", "h_{i:k} and h_{k+1:j} are defined as follows: h_{i:k} = MLP_left(u_{i:k}) and h_{k+1:j} = MLP_right(u_{k+1:j}), where MLP is a multi-layer perceptron.", "The vector representation of a span, u_{i:j}, is obtained by feeding word embedding vectors into LSTMs.", "Then, the span is split at the position k that maximizes Eq. (1): k = argmax_{k in {i, ..., j-1}} [ s_split(i, j, k) ].", "When splitting a span at position k, the score of the nuclearity status and relation labels for the two spans, s_label(i, j, k, ℓ), is defined as in Eq. (3),", "where W_ℓ is a weight matrix and u_{1:i} and u_{j:n} are the vector representations of the left and right spans that appear outside the current focus.", "Then, the label that maximizes Eq. (3) is assigned to the spans: ℓ = argmax_{ℓ in L} [ s_label(i, j, k, ℓ) ] (4), where L denotes the set of valid nuclearity status combinations, {N-S, S-N, N-N}, for predicting the nuclearity, and the set of relation labels, {Elaboration, Condition, ...}, for predicting the relation.", "Accordingly, we solve a 3-class classification problem for the nuclearity labeling and an 18-class classification problem for the relation labeling.", "Note that the weight parameters W_ℓ and the MLPs for the nuclearity and relation labeling are learned separately.", "All parameters, W_u, W_ℓ, v_r, v_ℓ, and the parameters of the LSTMs, are optimized by using margin-based learning.", "When the correct splitting position k* and label ℓ* are given, the loss functions for splitting and labeling are defined as follows: max(0, 1 + s_split(i, j, k) - s_split(i, j, k*)) and max(0, 1 + s_label(i, j, k, ℓ) - s_label(i, j, k, ℓ*)), where k and ℓ denote the predicted splitting position and label.", "Since the student parser still has room for improvement in Relation, it is desirable to utilize another state-of-the-art parser that is based on a different parsing algorithm and has a good Relation score.", "While the current best Relation score was achieved by NNDisParser (Yu et al., 2018), we cannot reproduce this score with their official code.", "Therefore, we employ the two-stage parser (Wang et al., 2017b), which obtained the second-best Relation score, as the teacher parser.", "This two-stage parser is based on a shift-reduce parsing algorithm and utilizes SVMs to determine actions to build trees.", "Since their SVMs are optimized by a dual coordinate descent method, we build multiple two-stage parser models with different seeds to obtain enough agreement between teacher parsers, and create silver data by the agreement among the parsers.", "We used the RST-DT to evaluate the performance of our student RST parser and compared it with state-of-the-art parsers.", "It is officially divided into 347 documents as the training dataset and 38 documents as the test dataset.", "Since there is no development dataset, we used a part of the training dataset, 40 documents, as the development dataset by following the previous study (Heilman and Sagae, 2015).", "By following conventional studies, we used gold EDU segmentation for the RST-DT.", "The training and development datasets were used as gold data to fine-tune our student parser.", "To obtain silver data for pre-training, we used the CNN dataset (Hermann et al., 2015).", "To 
parse each document, we split sentences into EDUs by using the NeuralEDUSeg segmenter (https://github.com/PKU-TANGENT/NeuralEDUSeg).", "(We could not adopt multiple different parser architectures, for example, the Two-stage parser and the Span-based parser, because the agreement between them was low.)", "l min and l max for AST extraction: Since the number of EDUs for a document in the RST-DT is from 7 to 240, we selected l min from a range of 5 to 10 and set l max to 240.", "Based on the results for the development dataset, l min was fixed to 9 (see Appendix A for details).", "Student Parser: We used the official code of the span-based neural top-down parsing method (https://github.com/nttcslab-nlp/Top-Down-RST-Parser).", "The dimension of the hidden layers was set to 500.", "We trained the model for 5 and 10 epochs for pre-training and fine-tuning, respectively.", "Other parameters of the model and the optimizer were the same as those used by Kobayashi et al. (2020) (see Appendix E for details).", "Kobayashi et al. (2020) achieved the best results in the D2P2S2E setting, training the models in three levels of granularity, i.e., paragraph trees for documents, sentence trees for paragraphs, and EDU trees for sentences.", "This setting requires us to train many models corresponding to multiple granularity levels.", "To simplify this, we trained only the model for building an RST tree whose leaves are EDUs for a document, which corresponds to their D2E setting.", "In decoding, we split spans at sentence and paragraph boundaries to make the setting closer to D2P2S2E.", "We also used ensemble decoding by following Kobayashi et al. (2020).", "Table 2: Micro-averaged F 1 scores of the span-based neural top-down parser with or without silver data on the test dataset of the RST-DT (S/N/R/F). Average: SBP 86.3/73.1/57.6/57.3, SBP+DT 86.9/74.1/61.8/61.0, SBP+ADT 86.6/73.5/59.5/58.8, SBP+AST 86.8/74.7/62.5/61.8; Ensemble: SBP 87.1/74.6/60.0/59.6, SBP+DT 87.4/74.7/62.7/61.7, SBP+ADT 86.9/74.3/60.5/59.7, SBP+AST 87.1/75.0/63.2/62.6.", "Since it takes a large amount of time to train multiple models in pre-training, we trained only a single model in the pre-training stage, while multiple models were trained in the fine-tuning stage with the pre-trained model as the initial state.", "Teacher parser: 6 We used the official code of the two-stage parsing method 7 and re-trained it four times with different random seeds.", "A smaller value of k makes the reliability of the agreement lower, since we cannot exclude coincidentally agreed trees.", "On the other hand, a larger value requires more time to create the silver data, while the reliability of the agreement is higher.", "Thus, we set k to 4, which is a moderate number in terms of both the reliability of the agreement and the data creation time.", "By following previous studies (Sagae and Lavie, 2005), we transformed RST trees into right-heavy binary trees and evaluated system results with micro-averaged F 1 scores of Span, Nuclearity, Relation, and Full, based on RST-Parseval (Marcu, 2000).", "Span, Nuclearity, Relation, and Full were used to evaluate unlabeled, nuclearity-labeled, relation-labeled, and fully-labeled tree structures, respectively.", "Since Morey et al. (2017) made a suggestion to use a standard parseEval toolkit for evaluation, we also report the results using this in Appendix C. 
4.4 Compared Methods To demonstrate the effectiveness of our proposed method, we pre-trained the span-based neural top-down parser, i.e., our student parser, in various settings for creating the silver data and compared 6 We show the results for a case of using SBP as a teacher parser in Appendix B. 7 https://github.com/yizhongw/StageDP the performance after fine-tuning on the RST-DT.", "Table 1 summarizes the statistics of the different types of silver data.", "DT' denotes RST trees obtained by using a single two-stage parser.", "The number of RST trees is the same as that of documents in the CNN dataset.", "ADT' denotes agreement document-level RST trees, i.e., the cases in which the parsers built the same trees for the whole document.", "AST' denotes ASTs of RST trees obtained from the teacher parsers.", "Table 2 shows the average and ensemble scores with five models for different types of silver data.", "In the table, SBP indicates the results obtained from the original span-based neural top-down parser, which means the parser was trained only with the RST-DT; this setting is without any silver data.", "With AST as the silver data, performance in all metrics improved against the baseline.", "In most metrics, AST achieved the best scores.", "In particular, the gains in Relation and Full were impressive.", "DT and ADT, which consist of document-level RST trees, also outperformed the baseline.", "However, the gains against the baseline were smaller than those by AST.", "We believe this is related to the size and quality of the silver data.", "The number of trees and nodes in ADT is only 2,142 and 57,940, respectively, while AST has 175,709 trees and 2,279,275 nodes.", "Thus, a small number of silver data for pre-training is not effective.", "On the other hand, while DT has only 91,536 trees, the number of their nodes is huge, at about 8,000 K. The lower score of DT would come from unreliable parse trees contained in the silver data built by a single teacher parser.", "As described above, to pre-train the student parser, we do not need to use the entire RST trees for documents.", "Thus, AST, with a large collection of RST subtrees, is more effective than the other approaches.", "Since the training time depends on the number of nodes contained in the data, SBP+AST can be learned in a quarter of the time required by SBP+DT.", "Consequently, AST has another advantage against DT.", "Furthermore, the performance of averaging five models was greatly improved by pre-training with the silver data.", "The gains against the baseline were larger than those for Ensemble,' and the differences between their performances became small.", "The neural model tends to converge to a different local optimum solution by mini-batch training, so the convergence is not stable when the data size is small.", "Pre-training can improve this.", "This is another advantage of pre-training with silver data.", "We also compare the results of our parser pre-trained with AST with and without fine-tuning in Appendix D. 5.2 Effect of Data Size To investigate how the data size of AST for pretraining affects the performance, we show Span, Nuclearity, Relation, and Full scores while varying the size in Figure", "3. 
Span scores showed only small gains even by increasing the amount of data because identifying splitting points for spans is a simple 2-class classification problem.", "On the other hand, identifying nuclearity and relation labels is a multi-class classification problem.", "Thus, we believe we need more training data than that for identifying splitting points.", "In particular, the Relation score could be improved with more silver data.", "To investigate the effectiveness of SBP+AST in more detail, we show Relation F 1 scores for relation labels with SBP, SBP+AST, and the two-stage parser in Figure 4.", "The results of SBP and SBP+AST were obtained from a five-model ensemble.", "For most relation labels, since the two-stage parser, i.e., the teacher parser, is comparable or superior to SBP, i.e., the student parser, the performance of SBP+AST can be improved.", "It finally outperformed the two-stage parser by introducing pretraining with silver data, even for less frequent relation labels.", "Furthermore, SBP+AST can correctly parse some relation labels that the student parser alone cannot.", "Figure 3: Results of changing the data size used for pretraining (micro-averaged F 1 against the number of nodes, for S, N, R, and F).", "Finally, we compare our SBP+AST with ensemble decoding against current state-of-the-art parsers.", "Table 3 shows the micro-averaged F 1 scores.", "We used Paired Bootstrap Resampling (Koehn, 2004) for the significance test.", "We can see that our method achieved the best scores except for Span.", "The gains against the previous best scores were 0.4, 3.0, and 2.7 points for Nuclearity, Relation, and Full, respectively.", "In particular, the gains for Relation and Full are remarkable.", "To solve the problem of the limited amount of training data available for neural RST parsing, we proposed a method of exploiting agreement subtrees as silver data: we pre-train a parser with the silver data and fine-tune it with the gold data.", "We also presented an algorithm that efficiently extracts overlapping subtrees as the agreement subtrees from multiple trees.", "Experimental results on the RST-DT demonstrated that our method significantly improves the relation-labeled and fully-labeled F 1 scores, which are strongly affected by data sparseness due to the small amount of training data.", "Furthermore, the results showed that our method achieves the state-of-the-art nuclearity-labeled, relation-labeled, and fully-labeled F 1 scores." ]
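The following is a minimal Python sketch of the agreement-subtree (AST) extraction described in the sentences above (the AGREEMENT and FINDROOT routines). The dict-of-spans tree encoding, the function names, and the treatment of leaves are illustrative assumptions rather than the authors' released implementation; it is only meant to show the counting-and-traversal logic, which is linear in the number of nodes per tree.

```python
# Hypothetical sketch of agreement-subtree (AST) extraction from k automatically parsed RST trees.
# Each tree is assumed to be a dict mapping a (start, end) EDU span to (label, left_child, right_child),
# where label encodes nuclearity/relation and child entries are spans or None for leaf EDUs.
from collections import Counter

def extract_asts(trees, l_min=9, l_max=240):
    k = len(trees)
    reference = trees[0]  # traverse an arbitrary tree, as the description suggests
    # Count how often each fully-labeled span occurs across the k trees.
    counts = Counter()
    for tree in trees:
        for span, (label, _, _) in tree.items():
            counts[(span, label)] += 1

    status = {}  # span -> True if the subtree rooted at this span agrees in all k trees

    def agreement(span):
        label, left, right = reference[span]
        s_left = agreement(left) if left else True
        s_right = agreement(right) if right else True
        agreed = (counts[(span, label)] == k) and s_left and s_right
        status[span] = agreed
        return agreed

    def find_roots(span, out):
        length = span[1] - span[0] + 1
        if status[span] and length <= l_max:
            if length >= l_min:
                out.append(span)  # maximal AST of acceptable size
            return  # subtrees below a maximal AST (or one smaller than l_min) are not extracted
        # Not agreed, or too large: search both children for smaller ASTs.
        _, left, right = reference[span]
        if left:
            find_roots(left, out)
        if right:
            find_roots(right, out)

    root = max(reference, key=lambda s: s[1] - s[0])  # widest span = document root
    agreement(root)
    roots = []
    find_roots(root, roots)
    return roots
```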
[ "abstain", "abstain", "abstain", "objective", "method", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "objective", "result", "method", "method", "objective", "method", "result", "result", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "method", "other", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "method", "objective", "result" ]
[ "[email protected]", "Abstract Distributed representations of words learned from text have proved to be successful in various natural language processing tasks in recent times.", "While some methods represent words as vectors computed from text using predictive model (Word2vec) or dense count based model (GloVe), others attempt to represent these in a distributional thesaurus network structure where the neighborhood of a word is a set of words having adequate context overlap.", "Being motivated by recent surge of research in network embedding techniques (DeepWalk, LINE, node2vec etc.), we turn a distributional thesaurus network into dense word vectors and investigate the usefulness of distributional thesaurus embedding in improving overall word representation.", "This is the first attempt where we show that combining the proposed word representation obtained by distributional thesaurus embedding with the state-of-the-art word representations helps in improving the performance by a significant margin when evaluated against NLP tasks like word similarity and relatedness, synonym detection, analogy detection.", "Additionally, we show that even without using any handcrafted lexical resources we can come up with representations having comparable performance in the word similarity and relatedness tasks compared to the representations where a lexical resource has been used.", "Natural language understanding has always been a primary challenge in natural language processing (NLP) domain.", "Learning word representations is one of the basic and primary steps in understanding text and nowadays there are predominantly two views of learning word representations.", "In one realm of representation, words are vectors of distributions obtained from analyzing their contexts in the text and two words are considered meaningfully similar if the vectors of those words are close in the euclidean space.", "In recent times, attempts have been made for dense representation of words, be it using predictive model like Word2vec (Mikolov et al., 2013) or count-based model like GloVe (Pennington et al., 2014) which are computationally efficient as well.", "Another stream of representation talks about network like structure where two words are considered neighbors if they both occur in the same context above a certain number of times.", "The words are finally represented using these neighbors.", "Distributional Thesaurus is one such instance of this type, which gets automatically produced from a text corpus and identifies words that occur in similar contexts; the notion of which was used in early work about distributional semantics (Grefenstette, 2012; Lin, 1998; Curran and Moens, 2002).", "One such representation is JoBimText proposed by Biemann and Riedl (2013) that contains, for each word, a list of words that are similar with respect to their bigram distribution, thus producing a network representation.", "Later, Riedl and Biemann (2013) introduced a highly scalable approach for computing this network.", "We mention this representation as a DT network throughout this article.", "With the emergence of recent trend of embedding large networks into dense low-dimensional vector space efficiently (Perozzi et al., 2014; Tang et al., 2015; Grover and Leskovec, 2016) which are focused on capturing different properties of the network like neighborhood structure, community structure, etc., we explore representing DT network in a dense vector space and evaluate its useful application in various NLP tasks.", "There has been 
attempt (Ferret, 2017) to turn distributional thesauri into word vectors for synonym extraction and expansion but the full utilization of DT embedding has not yet been explored.", "In this paper, as a main contribution, we 463 investigate the best way of turning a Distributional Thesaurus (DT) network into word embeddings by applying efficient network embedding methods and analyze how these embeddings generated from DT network can improve the representations generated from prediction-based model like Word2vec or dense count based semantic model like GloVe.", "We experiment with several combination techniques and find that DT network embedding can be combined with Word2vec and GloVe to outperform the performances when used independently.", "Further, we show that we can use DT network embedding as a proxy of WordNet embedding in order to improve the already existing state-of-the-art word representations as both of them achieve comparable performance as far as word similarity and word relatedness tasks are concerned.", "Considering the fact that the vocabulary size of WordNet is small and preparing WordNet like lexical resources needs huge human engagement, it would be useful to have a representation which can be generated automatically from corpus.", "We also attempt to combine both WordNet and DT embeddings to improve the existing word representations and find that DT embedding still has some extra information to bring in leading to better performance when compared to combination of only WordNet embedding and state-of-the-art word embeddings.", "While most of our experiments are focused on word similarity and relatedness tasks, we show the usefulness of DT embeddings on synonym detection and analogy detection as well.", "In both the tasks, combined representation of GloVe and DT embeddings shows promising performance gain over state-of-the-art embeddings.", "The core idea behind the construction of distributional thesauri is the distributional hypothesis (Firth, 1957): You should know a word by the company it keeps.", "The semantic neighbors of a target word are words whose contexts overlap with the context of a target word above a certain threshold.", "Some of the initial attempts for preparing distributional thesaurus are made by Lin (1998), Curran and Moens (2002), Grefenstette (2012).", "The semantic relation between a target word and its neighbors can be of different types, e.g., synonymy, hypernymy, hyponymy or other relations (Adam et al., 2013; Budanitsky and Hirst, 2006) which prove to be very useful in different natural language tasks.", "Even though computation of sparse count based models used to be inefficient, in this era of high speed processors and storage, attempts are being made to streamline the computation with ease.", "One such effort is made by Kilgarriff et al. 
(2004) where they propose Sketch Engine, a corpus tool which takes as input a corpus of any language and corresponding grammar patterns, and generates word sketches for the words of that language and a thesaurus.", "Recently, Riedl and Biemann (2013) introduce a new highly scalable approach for computing quality distributional thesauri by incorporating pruning techniques and using a distributed computation framework.", "They prepare distributional thesaurus from Google book corpus in a network structure and make it publicly available.", "In another stream of literature, word embeddings represent words as dense unit vectors of real numbers, where vectors that are close together in euclidean space are considered to be semantically related.", "In this genre of representation, one of the captivating attempt is made by Mikolov et al. (2013), where they propose Word2vec, basically a set of two predictive models for neural embedding whereas Pennington et al. (2014) propose GloVe, which utilizes a dense count based model to come up with word embeddings that approximate this.", "Comparisons have also been made between count-based and prediction-based distributional models (Baroni et al., 2014) upon various tasks like relatedness, analogy, concept categorization etc., where researchers show that prediction-based word embeddings outperform sparse count-based methods used for computing distributional semantic models.", "In other study, Levy and Goldberg (2014) show that dense count-based methods, using PPMI weighted co-occurrences and SVD, approximates neural word embeddings.", "Later, Levy et al. (2015) show the impact of various parameters and the best performing parameters for these methods.", "All these approaches are completely text based; no external knowledge source has been used.", "More recently, a new direction of investigation has been opened up where researchers are trying to combine knowledge extracted from knowledge bases, images with distributed word representations prepared from text with the expectation of getting better representation.", "Some use 464 Knowledge bases like WordNet (Miller, 1995), FreeBase (Bollacker et al., 2008), PPDB (Gan-itkevitch et al., 2013), ConceptNet (Speer et al., 2017), whereas others use ImageNet (Frome et al., 2013; Kiela and Bottou, 2014; Both et al., 2017; Thoma et al., 2017) for capturing visual representation of lexical items.", "There are various ways of combining multiple representations.", "Some of the works extract lists of relations from knowledge bases and use those to either modify the learning algorithms (Halawi et al., 2012; Wang et al., 2014; Tian et al., 2016; Rastogi et al., 2015) or postprocess pre-trained word representations (Faruqui et al., 2015).", "Another line of literature prepares dense vector representation from each of the modes (text, knowledge bases, visual etc.) and tries to combine the vectors using various methods like concatenation, centroid computation, principal component analysis (Jolliffe, 1986), canonical correlation analysis (Faruqui and Dyer, 2014) etc.", "One such recent attempt is made by Goikoetxea et al. (2016) where they prepare vector representation from WordNet following the method proposed by Goikoetxea et al. 
(2015), which combines random walks over knowledge bases and neural network language model, and tries to improve the vector representation constructed from text using this.", "As in lexical knowledge bases, the number of lexical items involved is much less than the raw text and preparing such resources is a cumbersome task, our goal is to see whether we can use DT network instead of some knowledge bases like WordNet and achieve comparable performance on NLP tasks like word similarity and word relatedness.", "In order to prepare vector representation from DT network, we attempt to use various network embeddings like DeepWalk (Per-ozzi et al., 2014), LINE (Tang et al., 2015), struc2vec (Ribeiro et al., 2017), node2vec (Grover and Leskovec, 2016) etc.", "Some of those try to capture the neighbourhood or community structure in the network while others attempt to capture structural similarity between nodes, second order proximity, etc. 3 Proposed Methodology Our aim is to analyze the effect of integrating the knowledge of Distributional Thesaurus network with the state-of-the-art word representation models to prepare a better word representation.", "We first prepare vector representations from Distributional Thesaurus (DT) network applying network representation learning model.", "Next we combine this thesaurus embedding with state-of-the-art vector representations prepared using GloVe and Word2vec model for analysis.", "Riedl and Biemann (2013) use the Google books corpus, consisting of texts from over 3.4 million digitized English books published between 1520 and 2008 and construct a distributional thesauri (DT) network using the syntactic n-gram data (Goldberg and Orwant, 2013).", "The authors first compute the lexicographer's mutual information (LMI) (Kilgarriff et al., 2004) for each bigram, which gives a measure of the collocational strength of a bigram.", "Each bigram is broken into a word and a feature, where the feature consists of the bigram relation and the related word.", "Then the top 1000 ranked features for each word are taken and for each word pair, intersection of their corresponding feature set is obtained.", "The word pairs having number of overlapping features above a threshold are retained in the network.", "In a nutshell, the DT network contains, for each word, a list of words that are similar with respect to their bigram distribution (Riedl and Biemann, 2013).", "In the network, each word is a node and there is a weighted edge between a pair of words where the weight corresponds to the number of overlapping features.", "A sample snapshot of the DT is shown in Figure 1.", "Now, from the DT network, we prepare the vector representation for each node using network representation learning models which produce vector representation for each of the node in a network.", "For this purpose, we use three state-of-the-art network representation learning models as discussed below.", "DeepWalk: DeepWalk (Perozzi et al., 2014) learns social representations of a graph's vertices by modeling a stream of short random walks.", "Social representations signify latent features of the vertices that capture neighborhood similarity and community membership.", "LINE: LINE (Tang et al., 2015) is a network embedding model suitable for arbitrary types of networks: undirected, directed and/or weighted.", "The model optimizes an objective which preserves both the local and global network structures by capturing both first-order and second-order proximity between vertices.", "node2vec: node2vec (Grover 
and Leskovec, 2016) is a semi-supervised algorithm for scalable feature learning in networks which maximizes the likelihood of preserving network neighborhoods of nodes in a d-dimensional feature space.", "This algorithm can learn representations that organize nodes based on their network roles and/or communities they belong to by developing a family of biased random walks, which efficiently explore diverse neighborhoods of a given node.", "Note that, by applying network embedding models on DT network we obtain 128 dimensional vectors for each word in the network.", "We only consider edges of the DT network having edge weight greater or equal to 50 for network embedding.", "Henceforth, we will use D2V-D , D2V-L and D2V-N to indicate vector representations obtained from DT network produced by DeepWalk, LINE and node2vec, respectively.", "After obtaining vector representations, we also explore whether these can be combined with the pre-trained vector representation of Word2vec and GloVe to come up with a joint vector representation.", "For that purpose, we directly use very wellknown GloVe 1.2 embeddings (Pennington et al., 2014) trained on 840 billion words of the common crawl dataset having vector dimension of 300.", "As an instance of pre-trained vector of Word2vec, we use prominent pre-trained vector representations prepared by Mikolov et al. (2013) trained on 100 billion words of Google News using skip-grams with negative sampling, having dimension of 300.", "In order to integrate the word vectors, we apply two strategies inspired by Goikoetxea et al. (2016): concatenation (CC) and principal component analysis (PCA).", "Concatenation (CC): This corresponds to the simple vector concatenation operation.", "Vector representations of both GloVe and Word2vec are of 300 dimensions and word embeddings learnt form DT are of 128 dimensions.", "The concatenated representation we use are of 428 dimensions.", "Principal Component Analysis (PCA): Principal component analysis (Jolliffe, 1986) is a dimensionality reduction statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components (linear combinations of the original variables).", "We apply PCA to the concatenated representations (dimension of 428) reducing these to 300 dimensions.", "In addition to PCA, we try with truncated singular value decomposition procedure (Hansen, 1987) as well, but as per the experiment set up, it shows negligible improvement in performance compared to simple concatenation; hence we do not continue with the truncated singular value decomposition for dimensionality reduction.", "After obtaining the combined representations of words, we head towards evaluating the quality of the representation.", "In order to evaluate the quality of the word representations, we first conduct qualitative analysis of the joint representation.", "Next, we follow the most acceptable way of applying on different NLP tasks like word similarity and word relatedness, synonym detection and word analogy as described next.", "On qualitative analysis of some of the word pairs from the evaluation dataset, we observe that the joint representation (PCA (GloVe,D2V-N)) captures the notion of similarity much better than GloVe.", "For example, it gives a higher cosine similarity scores to the pairs (car, cab), (sea, ocean), (cottage,cabin), (vision, perception) etc. 
in comparison to GloVe.", "Table 1: Comparison of individual performances (Spearman's ρ) of different vector representation models w.r.t. word similarity and relatedness tasks. Columns: GloVe, W2V, D2V-D, D2V-L, D2V-N. Rows: WSSim 0.799/0.779/0.737/0.073/0.764; SimL-N 0.427/0.454/0.418/0.015/0.421; RG-65 0.791/0.777/0.804/-0.121/0.813; MC-30 0.799/0.819/0.859/-0.067/0.869; WSR 0.637/0.631/0.287/0.077/0.333; M771 0.707/0.655/0.636/0.027/0.63; M287 0.8/0.755/0.558/-0.027/0.591; MEN-N 0.819/0.764/0.619/0.004/0.612; WS-353 0.706/0.697/0.51/0.088/0.547.", "However, in some cases, where words are not similar but are related, e.g., (airport, flight), (food, meat), (pepper, soup), (harbour, shore), the joint representation gives a comparatively lower cosine similarity score than GloVe.", "In the next set of evaluation experiments, we observe this utility of the joint representation towards the word similarity task and the word relatedness task to some extent.", "In this genre of tasks, the human judgment score for each word pair is given; we report the Spearman's rank correlation coefficient (ρ) between the human judgment score and the score predicted by the distributional model.", "Note that we take the cosine similarity between the vector representations of the words in a word pair as the predicted score.", "Datasets: We use the benchmark datasets for evaluation of word representations.", "Four word similarity datasets and four word relatedness datasets are used for that purpose.", "The descriptions of the word similarity datasets are given below.", "WordSim353 Similarity (WSSim): 203 word pairs extracted from the WordSim353 dataset (Finkelstein et al., 2001) by manual classification, prepared by Agirre et al. (2009), which deals with only similarity.", "SimLex999 (SimL): 999 word pairs rated by 500 paid native English speakers, recruited via Amazon Mechanical Turk (www.mturk.com), who were asked to rate the similarity.", "This dataset is introduced by Hill et al. (2016).", "RG-65: It consists of 65 word pairs collected by Rubenstein and Goodenough (1965).", "These word pairs are judged by 51 humans on a scale from 0 to 4 according to their similarity, but ignoring any other possible semantic relationships.", "MC-30: 30 word pairs judged by 38 subjects on a scale of 0 to 4, collected by Miller and Charles (1991).", "The descriptions of the word relatedness datasets are given below: WordSim353 Relatedness (WSR): 252 word pairs extracted from the WordSim353 (Finkelstein et al., 2001) dataset by manual classification, prepared by Agirre et al. (2009), which deals with only relatedness.", "MTURK771 (M771): 771 word pairs evaluated by Amazon Mechanical Turk workers, with an average of 20 ratings for each word pair, where each judgment task consists of a batch of 50 word pairs.", "Ratings are collected on a 1-5 scale.", "This dataset is introduced by Halawi et al. (2012).", "MTURK287 (M287): 287 word pairs evaluated by Amazon Mechanical Turk workers, with an average of 23 ratings for each word pair.", "This dataset is introduced by Radinsky et al. (2011).", "MEN: MEN consists of 3,000 word pairs with [0, 1]-normalized semantic relatedness ratings provided by Amazon Mechanical Turk workers.", "This dataset was introduced by Bruni et al. (2014).", "Along with these datasets we use the full WordSim353 (WS-353) dataset (which includes both similarity and relatedness pairs) (Finkelstein et al., 2001); it contains 353 word pairs, each associated with an average of 13 to 16 human judgments on a scale of 0 to 10.", "Being inspired by Baroni et al. 
(2014), we consider only noun pairs from the SimL and MEN datasets, which will be denoted as SimL-N and MEN-N, whereas the other datasets only contain noun pairs.", "We start with experiments to inspect the individual performance of each of the vector representations for each of the datasets.", "Table 1 represents the individual performances of GloVe, Word2vec, D2V-D, D2V-L and D2V-N for the different datasets.", "Table 2: Comparison of performances (Spearman's ρ) of GloVe against the combined representations of GloVe with the word representations obtained from the DT network using network embeddings (DeepWalk, node2vec). Columns: GloVe, CC (GloVe, D2V-D), PCA (GloVe, D2V-D), CC (GloVe, D2V-N), PCA (GloVe, D2V-N). Rows: WSSim 0.799/0.838/0.839/0.84/0.832; SimL-N 0.427/0.443/0.468/0.446/0.483; RG-65 0.791/0.816/0.879/0.809/0.857; MC-30 0.799/0.86/0.89/0.866/0.874; WSR 0.637/0.676/0.645/0.67/0.657; M771 0.707/0.708/0.707/0.711/0.719; M287 0.8/0.781/0.807/0.795/0.82; MEN-N 0.819/0.792/0.799/0.806/0.817; WS-353 0.706/0.751/0.74/0.75/0.75.", 
performances.", "From Tables 1, 2 and 3, we see that GloVe proves to be better than word2Vec for most of the cases, D2V-N is the best performing network embedding, and PCA turns out to be the best combination technique.", "Henceforth, we consider PCA (GloVe, D2V-N ) as our model for comparison with the baselines for the rest of the experiments.", "Further, to scrutinize that the achieved result is not just the effect of combining two different word vectors, we compare PCA (GloVe, D2V-N) against combination of GloVe and Word2vec (W2V).", "Table 4 shows the performance comparison on different datasets and it is evident that PCA (GloVe, D2V-N) gives better results compared to PCA (GloVe, W2V) in most of the cases.", "Now, as we observe that the network embedding from DT network helps to boost the performance of Word2vec and GloVe when combined with them, we further compare the performance against the case when text based embeddings are combined with embeddings from lexical resources.", "For that purpose, we take one baseline (Goikoetxea et al., 2016), where authors combined the text based representation with WordNet based representation.", "Here we use GloVe as the text based representation and PCA as the combination method as prescribed by the author.", "Note that, WordNet based representation is made publicly available by Goikoetxea et al. (2016).", "From the second and third columns of Table 5, we observe that even though we do not use any manually created lexical resources like WordNet our approach achieves comparable performance.", "Additionally we check whether we gain in terms of performance if we integrate the three embeddings together.", "Fourth column of Table 5 shows that we gain for some of the datasets and for other cases, it has a negative effect.", "Looking at the performance, we can conclude that automatically generated DT network from corpus brings in useful additional information as far as word similarity and relatedness tasks are concerned.", "So far, we use concatenation and PCA as methods for combining two different representations.", "However, as per the literature, there are different ways of infusing knowledge from different lexical sources to improve the quality of pre-trained vector embeddings.", "So we compare our proposed way of combination with a completely different way of integrating information from both dimensions, known as retrofitting .", "Retrofitting is a novel way proposed by Faruqui et al. 
(2015) for refining vector space representations using relational information from semantic lexicons by encouraging linked words to have similar vector representations.", "Here instead of using semantic lexicons, we use the DT network to produce the linked words to have similar vector representation.", "Note that, for a target word, we consider only those words as linked words which are having edge weight greater than a certain threshold.", "While experimenting with various thresholds, the best results were obtained for a 469 threshold value of 500.", "Table 6 shows the performance of GloVe representations when retrofitted with information from DT network.", "Even though in very few cases it gives little improved performance, compared to other combinations presented in Table2, the correlation is not very good, indicating the fact that retrofitting is probably not the best way of fusing knowledge from a DT network.", "Further, we extend our study to investigate the usefulness of DT embedding on other NLP tasks like synonym detection, SAT analogy task as will be discussed next.", "We consider two gold standard datasets for the experiment of synonym detection.", "The descriptions of the used datasets are given below.", "TOEFL: It contains 80 multiple-choice synonym questions (4 choices per question) introduced by Landauer and Dumais (1997), as a way of evaluating algorithms for measuring degree of similarity between words.", "Being consistent with the previous experiments, we consider only nouns for our experiment and prepare TOEFL-N which contains 23 synonym questions.", "ESL: It contains 50 multiple-choice synonym questions (4 choices per question), along with a sentence for providing context for each of the question, introduced by Turney (2001).", "Here also we consider only nouns for our experiment and prepare ESL-N which contains 22 synonym questions.", "Note that, in our experimental setup we do not use the context per question provided in the dataset for evaluation.", "While preparing both the datasets, we also keep in mind the availability of word vectors in both downloaded GloVe representation and prepared DT embedding.", "For evaluation of the word embeddings using TOEFL-N and ESL-N , we consider the option as the correct answer which is having highest cosine similarity with the question and report accuracy.", "From the results presented in Table 7, we see that DT embedding leads to boost the performance of GloVe representation.", "For analogy detection we experiment with SAT analogy dataset.", "This dataset contains 374 multiple-choice analogy questions (5 choices per question) introduced by Turney and Bigham (2003) as a way of evaluating algorithms for measuring relational similarity.", "Considering only Dataset GloVe D2V-N PCA (GloVe, D2V-N) TOEFL-N 0.826 0.739 0.869 ESL-N 0.636 0.591 0.682 SAT-N 0.465 0.509 0.515 Table 7: Comparison of accuracies between GloVe representation, DT embedding using node2vec and combination of both where PCA is the combination technique.", "contains 159 questions.", "In order to find out the correct answer from the 5 options given for each question, we take up a score ( s ) metric proposed by Speer et al. 
(2017), where for a question a 1 is to b 1 ', we will consider a 2 is to b 2 ' as the correct answer among the options, whose score ( s ) is the highest.", "Score ( s ) is defined by the author as follows: s = a 1", "w 2 ( b 2 b 1 ) .", "( a 2 a 1 )", "As mentioned by the authors, the appropriate values of w 1 and w 2 are optimized separately for each system using grid search, to achieve the best performance.", "We use accuracy as the evaluation metric.", "The last row of Table 7 presents the comparison of accuracies (best for each model) obtained using different embeddings portraying the same observation that combination of GloVe and DT embeddings leads to better performance compared to GloVe and DT embeddings when used separately.", "Note that, the optimized values of ( w 1 , w 2 ) are (0.2,0.2), (0.8,0.6), (6,0.6) for GloVe, DT embedding, combined representation of GloVe and DT embeddings, respectively, for the analogy task.", "In this paper we showed that both dense count based model (GloVe) and predictive model (Word2vec) lead to improved word representation when they are combined with word representation learned using network embedding methods on Distributional Thesaurus (DT) network.", "We tried with various network embedding models among which node2vec proved to be the best in our experimental setup.", "We also tried with different methodologies to combine vector representations and PCA turned out to be the best among them.", "The combined vector representation of words yielded the better performance for most 470 of the similarity and relatedness datasets as compared to the performance of GloVe and Word2vec representation individually.", "Further we observed that we could use the information from DT as a proxy of WordNet in order to improve the state-of-the-art vector representation as we were getting comparable performances for most of the datasets.", "Similarly, for synonym detection task and analogy detection task, the same trend of combined vector representation continued, showing the superiority of the combined representation over state-of-the-art embeddings.", "All the datasets used in our experiments which are not under any copyright pro-tection, along with the DT embeddings are made publicly available 2 .", "In future we plan to investigate the effectiveness of the joint representation on other NLP tasks like text classification, sentence completion challenge, evaluation of common sense stories etc.", "The overall aim is to prepare a better generalized representation of words which can be used across languages in different NLP tasks." ]
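Below is a small, hedged sketch of the two combination strategies described above, concatenation (CC) and PCA, assuming the GloVe vectors (300-d) and the DT embeddings from node2vec (128-d) have already been loaded as dictionaries of NumPy arrays; the function names and the use of scikit-learn's PCA are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch of the CC and PCA combination strategies; assumes glove and dt
# are {word: np.ndarray} dicts (e.g., 300-dim GloVe and 128-dim node2vec DT vectors).
import numpy as np
from sklearn.decomposition import PCA

def combine(glove, dt, n_components=300):
    """Return concatenated (428-dim) and PCA-reduced (300-dim) joint word vectors."""
    shared = sorted(set(glove) & set(dt))                 # words present in both spaces
    cc = {w: np.concatenate([glove[w], dt[w]]) for w in shared}   # CC: 300 + 128 = 428 dims

    matrix = np.stack([cc[w] for w in shared])
    reduced = PCA(n_components=n_components).fit_transform(matrix)  # project back to 300 dims
    pca = {w: reduced[i] for i, w in enumerate(shared)}
    return cc, pca

def cosine(u, v):
    """Cosine similarity between two vectors, used as the predicted word-pair score."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Word-pair scores for the similarity and relatedness evaluations would then be the cosine similarities of the combined vectors, which are correlated with the human judgments via Spearman's ρ as in the experiments above.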
[ "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "objective", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "result", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "method", "method", "abstain", "result", "abstain", "method", "objective", "abstain" ]
[ "Automated methods have been widely used to identify and analyze mental health conditions (e.g., depression) from various sources of information, including social media.", "Yet, deployment of such models in real-world healthcare applications faces challenges including poor out-of-domain generalization and lack of trust in black box models.", "In this work, we propose approaches for depression detection that are constrained to different degrees by the presence of symptoms described in PHQ9, a questionnaire used by clinicians in the depression screening process.", "In dataset-transfer experiments on three social media datasets, we find that grounding the model in PHQ9's symptoms substantially improves its ability to generalize to out-of-distribution data compared to a standard BERT-based approach.", "Furthermore, this approach can still perform competitively on in-domain data.", "These results and our qualitative analyses suggest that grounding model predictions in clinically-relevant symptoms can improve generalizability while producing a model that is easier to inspect.", "Given the significance of mental health as a pub-lic health challenge (Brdvik, 2018), much work has investigated approaches for detecting mental health conditions using social media text (Yates et al., 2017; Coppersmith et al., 2018; Shing et al., 2020; Harrigian et al., 2021).", "Such approaches could be used by at-risk users and their clinicians to monitor behavioral changes (e.g., by monitoring changes in the presence of symptoms related to depression as treatment progresses.", "These approaches generally rely on datasets consisting of users with self-reported diagnoses (e.g., based on a statement like I was just diagnosed with de-pression ) for training and evaluation (e.g., Yates et al., 2017; Cohan et al., 2018).", "Despite promising results on these tasks, related work argues that assessing depression and suicidal behavior is difficult in practical settings and even experienced clinicians frequently struggle to correctly interpret signals (Coppersmith et al., 2018).", "Furthermore, recent work has found that models trained on particular mental health datasets do not always generalize to others.", "Harrigian et al. (2020); Ernala et al. (2019) find that systematic, spurious differences between diagnosed and control users can prevent trained models from generalizing to even other, similar social media data.", "Similarly, outside the mental health domain, recent work reports that neural models often struggle to generalize to data outside their training distribution (Geirhos et al., 2020; D'Amour et al., 2020; Harrigian et al., 2020).", "In this work, we explore approaches for constraining the behavior of depression detection methods by the presence of symptoms known to be related to depression, like mood and sleep issues.", "To do so, we develop nine symptom detection models that correspond to questions present in PHQ9, a screening questionnaire that has been clinically validated and commonly used in practical setting (Kroenke et al., 2001).", "These questions ask how often the patient has experienced symptoms from nine symptom groups (e.g., how often have you had little interest/pleasure in doing things? 
).", "Grounding depression detection in a trusted diagnostic tool produces several benefits.", "From the perspective of mental health professionals, the output of such model is inherently more reliable than a black-box model, because classification decisions are based on the presence of specific symptoms in specific posts that can be inspected in order to assess the quality of evidence for a diagnosis.", "Further, we find this improves the model's ability to generalize, which may be due to limiting its ability to use spurious shortcuts.", "This strategy is complementary to strategies for reducing temporal and topical artifacts (Harrigian et al., 2020).", "ple yet effective models: a questionnaire model that detects symptoms from PHQ9 and a depression detection model.", "We instantiate both with a range of methods that are progressively less constrained.", "At one end of the spectrum, the questionnaire model uses only manually-defined patterns and the depression model makes classification decisions by counting how many times these patterns appear in a user's posts.", "At the opposite end of the spectrum, there is no explicit questionnaire model and BERT (Devlin et al., 2019a) serves as an unconstrained depression detection model.", "In between, we relax the questionnaire model by training BERT-based symptom classifiers using the manually-defined patterns, by considering symptom representations rather than counts, and by adding an extra trainable other ' symptom.", "We find that our constrained models perform competitively compared to a standard unconstrained BERT classifier when trained and evaluated on the same dataset, while additionally providing a model whose behavior can be understood in terms of relevant symptoms in specific posts.", "However, dataset-transfer evaluations demonstrate substantial degradation in BERT's effectiveness.", "In this setting, our constrained models outperform the unconstrained BERT and show improved generalizability, even across similar datasets.", "Our contributions are: (1) comprehensive pattern sets for identifying the symptoms in PHQ9 and heuristics for using them to train weakly-supervised symptom classifiers, (2) a range of progressively less constrained methods for performing depression detection based on these symptoms, and (3) an extensive evaluation of depression detection methods.", "Our implementation is available online 1 .", "Natural language processing methods have been widely used for automatic mental health assessment.", "To support automated analyses of mental health related language, a variety of datasets have been proposed.", "Coppersmith et al. (2014) focused on predicting depressed and PTSD users in Twitter, whereas Milne et al. (2016); Shing et al. (2018) and Zirikly et al. (2019) aimed to detect high risk and suicidal users from their ReachOut and Reddit posts, respectively.", "Yates et al. (RSDD; 2017), Cohan et al. (SMHD; 2018), and Wolohan 1 https://github.com/thongnt99/ acl22-depression-phq9 et al. 
(2018) investigated identifying depression and other mental health conditions from Reddit.", "Rich bodies of work in this area focused on studying language use and linguistic styles in depressed users.", "LIWC (Tausczik and Pennebaker, 2010) has been one of the most popular tools to characterize depression language (Ramirez-Esparza et al., 2008; De Choudhury et al., 2013).", "Similarly to other NLP domains, the use of contextualized embeddings has improved the performance of classifiers (Jiang et al., 2020; Matero et al., 2019).", "Recent work shows that while such NLP models achieve promising results, they have poor generalization to new data platforms and user groups; For example, Harrigian et al. (2020) investigated various factors, including sample size, class imbalance, temporal misalignment (e.g., language dynamic, linguistic norms), deployment latency, and self-disclosure bias that may cause performance degradation when a model is transferred to a new dataset or domain.", "The issues can occur even when datasets appear to be similar, such as when Reddit-based datasets employ different rules for selecting diagnosed and control users.", "Another problem is the black-box nature of model predictions which is a major hurdle in deploying AI models in clinical practice (Mullenbach et al., 2018).", "In this work we aim at reducing this problem by proposing to ground depression assessment in a clinical questionnaire for measuring severity of depression.", "Others have considered making predictions more explainable in the mental health domain.", "Amini and Kosseim (2020) focused on leveraging a user-level attention mechanism for detecting signs of anorexia in social media profiles.", "Our method differs from theirs in that the explanations are the results of the analysis of the attention weights, while our approaches ground model predictions in a well-established clinical instrument.", "Outside of our work, we are aware of two datasets that incorporate questionnaire information such as PHQ9 for identifying depression.", "The most recent eRisk shared task (Losada et al., 2019) relies on the Beck Depression Inventory (BDI), a 21-item questionnaire that assesses level of depression based on the presence of feelings like sadness, pessimism, etc.", "Models are built to estimate the user-level BDI score at given time frames.", "Our approach differs in that we use PHQ9 and evaluate item scores at the post level, which grounds predictions in the presence of clinically-relevant 8447 WEAK SUPERVISION METHODS Patterns & dictionaries I.*kill(ed) myself, no motivation Sentiment models Pronoun I found him in ,lying in bed all-day Negation I rarely have problem with ...", "symptoms.", "In eRisk, a sum of BDI scores is the modeled outcome (corresponding to our baseline pattern-based (threshold) classifier).", "We use user-level labels for evaluating depression status and evaluate how constraining on PHQ9 symptoms affects the user-level classification performance.", "Delahunty et al. (2019) used a deep neural network to predict PHQ4 scores using clinical data that contains patients' PHQ4 scores (Gratch et al., 2014).", "Our work does not require access to PHQ labeled clinical data, which can be hard to obtain at scale.", "Furthermore, Delahunty's approach generalizes poorly to social media data.", "Rinaldi et al. 
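The pattern-based end of the model spectrum discussed in the surrounding text is simple enough to sketch directly. The snippet below (Python; the symptom pattern sets, threshold value, and function names are illustrative placeholders, not the paper's released resources) builds the binary post-by-symptom matching matrix and applies the count-based threshold rule described later in this record.

```python
import re
import numpy as np

# Hypothetical, abbreviated pattern sets; the actual sets cover all nine PHQ-9 symptoms.
SYMPTOM_PATTERNS = {
    "sleep": [r"can'?t sleep", r"insomnia"],
    "anhedonia": [r"don'?t want to do anything", r"lost interest"],
    # ... remaining seven symptoms ...
}

def pattern_matrix(posts, patterns=SYMPTOM_PATTERNS):
    """Questionnaire model, pattern variant: binary matrix of shape (num_posts, num_symptoms)."""
    symptoms = sorted(patterns)
    mat = np.zeros((len(posts), len(symptoms)), dtype=int)
    for i, post in enumerate(posts):
        for j, s in enumerate(symptoms):
            if any(re.search(p, post, flags=re.IGNORECASE) for p in patterns[s]):
                mat[i, j] = 1
    return mat

def count_based_depression(posts, threshold=3):
    """Depression model, threshold variant: positive if enough symptom matches are found."""
    return int(pattern_matrix(posts).sum() >= threshold)
```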
(2020) predict depression based on screening interviews that rely on PHQ9 categories.", "In their setting, PHQ9 is a channel to retrieve the depressed label, but is not used for explainability.", "Yadav et al. (2020) propose a multitask learning framework that uses PHQ-9 and figurative language detection as auxiliary tasks.", "Lee et al. (2021) contemporaneously propose a micromodel architecture that they apply to mental health assessment tasks.", "Our work shares several similarities with this approach, which uses micromodels that are similar to our symptom classifiers (questionnaire models).", "Our most straightforward and constrained methods are pattern-based classifiers that make classification decisions based on the presence of positive symptom patterns.", "This method could be decomposed into two components: a questionnaire model and a depression model.", "Questionnaire model.", "The questionnaire model of pattern-based methods is simply a pattern matcher that matches each user post against symptom patterns.", "It produces a binary pattern matching matrix of size ( num _ post 9) whose entry at ( i, j ) is 1 if a match is found between the i th post and any pattern of the j th symptom (question).", "Depression model.", "We implement two variations of the depression model whose input is the pattern matching matrix generated by the previous questionnaire model: a count-based approach and a CNN approach.", "The count-based approach simply considers whether the number of patterns found in the pattern matrix exceeds a threshold.", "The CNN approach applies CNN kernels cascaded with a linear layer over the pattern matrix.", "This approach allows consecutive posts to be weighted differently.", "In pilot experiments, we also tested a variant that closely mirrors PHQ9 by summing scores over a two-week window; this variant performed worse due to the new temporal requirement that often creates data sparsity within windows.", "One drawback of pattern-based classifiers is the inflexibility of pattern matching.", "The classifier-constrained methods relax the pattern-matching requirement by training a questionnaire model on the weakly-supervised data described in Section 4.2.", "This results in models that remain grounded in the clinical questionnaire but are capable of generalizing beyond the pattern sets.", "The PHQ9 architecture is also comprised of a questionnaire model and a depression model.", "Questionnaire model.", "The questionnaire model receives BERT (Devlin et al., 2019b) token embeddings of every post and is trained to predict the answer (positive or negative) for each of the questions in the PHQ9 instrument.", "This model consists of 9 symptom classifiers, anhedonia , concentration , eating , fatigue , mood , psychomotor , self-esteem , self-harm and sleep , corresponding to the questionnaire's 9 questions.", "Each symptom classifier is a CNN classifier with a linear layer on top.", "As illustrated in Figure 1, all symptom clas-8448 sifiers were separately trained on weakly-labeled data, which we describe in Section 4.2.", "The questionnaire model's ability to generalize to unseen patterns comes from two sources: BERT embeddings and weakly-labeled symptom data.", "First, BERT embeddings have been successfully used to transfer knowledge across domains in many NLP applications (Rietzler et al., 2020; Peng et al., 2019; Houlsby et al.).", "Second, in weakly-labeled data, the background or contextual text around the matched patterns could provide relevant cues, which is a means of 
generalization.", "For example, in the text now I don't want to do anything. I can't do more than sleep, eat, and watch tv. , background phrases, such as I can't do more... , are as useful as the underlined pattern for identifying the symptom anhedonia .", "Depression model.", "The depression model predicts whether a user is depressed based on the questionnaire model's output for each post.", "The questionnaire model's output can be either the final question scores (i.e., symptom scores) or the hidden layers (i.e., symptom vectors) of the 9 sub-models.", "The former represents each post with a single vector of size 9 , which is compact but less informative, while the latter is a larger matrix of size hidden _ size 9 that preserves more information.", "Any classification architecture could be used for this depression model.", "For simplicity, we use a linear classifier on top of features extracted by CNN kernels of various sizes.", "CNN kernels help summarize symptoms within a sliding window of consecutive posts sorted by timestamp and therefore are a relaxation of the two-week windows considered by the PHQ9 instrument.", "This relaxation allows more posts to be considered by each CNN kernel, which mitigates the data sparsity problem of the hard two-week window approach.", "This depression model is trained using user-level depression labels, and while this model is being trained, the encoder and questionnaire components are frozen.", "The frozen weights ensure that each questionnaire model does not drift away from its original purpose of detecting symptoms.", "PHQ9Plus extends the PHQ9 method by appending an additional symptom (neuron) to the PHQ9 symptoms that form the questionnaire model.", "This neuron is connected to post embedding and produces a score for every post.", "Furthermore, we make this additional neuron trainable end-to-end to learn other signals similarly to PHQ9 symptoms.", "Doing that allows PHQ9Plus to learn from training data other depressive signals in addition to the PHQ9 symptoms.", "However, in return, it also risks incorporating undesirable shortcuts that harm the model's generalizability.", "In the previous classifier-constrained methods, depression classifications are constrained by a questionnaire model that determines the presence of symptoms.", "This is an information bottleneck intended to make the model generalize better.", "In order to quantify the impact of this bottleneck, we also consider an unconstrained model that replace the questionnaire model in the previous methods by only a BERT encoder.", "This gives a loose upper bound on the classifier-constrained methods's performance since this approach has access to the raw BERT embeddings and thus can utilize more signals (even spurious ones) than those captured by the questionnaire model.", "We conduct experiments on three datasets; all consist of Reddit social media data but follow different construction methodologies (e.g., identifying depressed users based on a self-report statement vs. 
based on starting a thread in a support subreddit).", "In addition to evaluating methods on each dataset, dataset-transfer evaluation allows us to evaluate how well methods generalize to similar datasets with different construction methodologies.", "The three datasets selected for experiments are RSDD (Yates et al., 2017), eRisk2018 (Losada and Crestani, 2016) and TRT (Wolohan et al., 2018).", "The RSDD (Reddit Self-reported Depression Diagnosis) dataset was constructed from Reddit posts and contains approximately 9 , 000 self-reported diagnosed users and 107 , 000 matched control users.", "eRisk2018 is a smaller dataset of 214 depressed users and 1 , 493 control users curated to evaluate the effectiveness of early risk detection on the Internet.", "Similar to RSDD, the depression group in eRisk2018 was collected based on user self-reports; Here, posts from mental health subreddits were not excluded like in RSDD.", "Due to the small size of the original training set, which makes the deep learning 8449 approaches unstable, we re-partitioned this dataset to allow more data for training.", "Different from RSDD and eRisk2018, TRT (Topic-Restricted-Text) was constructed based on community participation.", "Specifically, the depressed users were drawn from members of the /r/depression subreddit, and control users were sampled from the /r/AskReddit subreddit.", "Following the construction guideline described in (Wolohan et al., 2018) and discussion with the authors, we re-generated a version of TRT containing 6 , 805 depressed users and 57 , 155 control users.", "On all datasets, we report the F1 score of the positive (i.e., diagnosed) class, and the area under the receiver operating characteristic curve (AUC).", "The questionnaire model is tasked with classifying if a given post contains a PHQ9 symptom (positive) or not (negative).", "Given the lack of training data for this task, we collected regular expression patterns and heuristics to construct weakly-supervised training data for each of the symptoms.", "We describe the process succinctly here and provide additional details in the Appendix.", "We note that this weakly-supervised data is used only for training.", "For each question, we prepare a set of positive symptom patterns (e.g., can'?t sleep ).", "Each pattern set is then matched against a post collection crawled from 127 mental-health subreddits 2 .", "In addition, we also include posts from the SMHD dataset (Cohan et al., 2018), which excludes posts from mental-health subreddits, to diversify the training data.", "In the labeling step, we select posts containing symptom patterns as positive examples.", "While being fast and transparent, pattern matching may produce many false positives (FPs).", "We used additional heuristics to remove instances of the four most common types of FPs we observed: Positive sentiment.", "Posts containing symptom patterns but conveying a positive/happy sentiment.", "Conditional clause.", "Posts describing a symptom hypothesis rather than an experience.", "Third-person pronouns.", "Posts discussing symptoms of other people (e.g., friends, relatives) rather symptoms the user is experiencing.", "Negation.", "Posts containing symptom patterns with negation words (e.g., not , never ) preceding.", "Identifying hard negative samples is crucial for the quality of the trained classifiers.", "We use five heuristics to identify and synthesize negative examples for each symptom: Keywords.", "Posts that contain keywords (e.g., sleep ) related to positive patterns (e.g., 
can't sleep ) but do not match any positive pattern. Pronouns. Posts synthesized by replacing first-person pronouns (e.g., I ) in the positive examples with third-person pronouns (e.g., She ). Other symptoms. Posts sampled randomly from positive examples of other symptoms (without matching a pattern for the current symptom). Negation. Posts synthesized by negating symptom patterns in positive examples using hand-crafted mappings (e.g., tired to never tired ). Positive sentiment. Posts sampled from the neutral or positive classes of the Sentiment140 sentiment analysis corpus (Go et al., 2009). 4.3 Experimental setup We designed experiments to analyze our two main components: the questionnaire and depression models. The setup for these experiments is summarized in Table 1 and specific hyperparameters are described in the Appendix. Table 1 (experimental variations; REP: symptom representation, DM: depression model): Pattern (threshold): no encoder, REP pattern matrix, DM threshold over counts; Pattern (CNN): no encoder, REP pattern matrix, DM CNN; PHQ9 (scores): BERT encoder, REP scores, DM CNN; PHQ9 (vectors): BERT encoder, REP vectors, DM CNN; PHQ9Plus: BERT encoder, REP scores + other, DM CNN; Unconstrained (BERT): BERT encoder, no symptom REP, DM CNN. 4.4 Depression detection results The results from prior work and our methods on RSDD, TRT, and eRisk are shown in Table 2. Depression detection results in the dataset-transfer evaluation are shown in the non-gray blocks of Table 2. Unlike standard within-dataset evaluations, this scenario requires methods trained on one dataset to generalize to other ( highly similar ) datasets. While all datasets consider the same social media platform, their dataset construction methodologies differ, and thus they are likely to contain different dataset artifacts. Unconstrained methods have the flexibility to learn shortcuts induced by these artifacts, which can lead to poor generalization beyond the training corpus. This effect is observed in the results of the unconstrained model: as summarized in Figure 2, BERT is outperformed by our PHQ9 (scores, vectors) methods or even the pattern-based methods in many dataset-transfer settings. Figure 2: Relative comparison between PHQ9 methods vs. BERT and PHQ9Plus in dataset-transfer settings. Win (or Loss): PHQ9 performs significantly better (or worse); Draw: no significant difference (t-test, significance level 0.05). Table 2: Depression detection on the RSDD, eRisk and TRT datasets; each cell reports AUC / F1 as mean ± standard deviation on the given test set; the first line of a block is a prior work's result; highest scores marked in bold in the original table; all of our methods and CNN(400) use only a user's first 400 posts, while other baselines use all posts; summary with statistical test in Figure 2. Trained on TRT: LIWC+ngram (Wolohan et al., 2018): TRT 0.79 / 0.73, not reported on eRisk and RSDD; Pattern (threshold): eRisk - / 0.38±0.00, RSDD - / 0.35±0.00, TRT - / 0.46±0.00; Pattern (CNN): eRisk 0.79±0.01 / 0.40±0.02, RSDD 0.71±0.00 / 0.26±0.01, TRT 0.80±0.01 / 0.51±0.02; PHQ9 (scores): eRisk 0.85±0.01 / 0.41±0.01, RSDD 0.78±0.03 / 0.35±0.03, TRT 0.92±0.01 / 0.64±0.02; PHQ9 (vectors): eRisk 0.86±0.00 / 0.31±0.01, RSDD 0.73±0.00 / 0.31±0.00, TRT 0.96±0.00 / 0.77±0.00; PHQ9Plus: eRisk 0.80±0.01 / 0.40±0.07, RSDD 0.59±0.03 / 0.21±0.02, TRT 0.95±0.00 / 0.79±0.00; Unconstrained (BERT): eRisk 0.84±0.01 / 0.15±0.02, RSDD 0.66±0.03 / 0.22±0.03, TRT 0.98±0.00 / 0.82±0.00. Trained on RSDD: CNN(400) (Yates et al., 2017): RSDD F1 0.51, other cells not reported; Pattern (threshold): eRisk - / 0.38±0.00, RSDD - / 0.35±0.00, TRT - / 0.46±0.00; Pattern (CNN): eRisk 0.79±0.01 / 0.47±0.00, RSDD 0.74±0.01 / 0.36±0.02, TRT 0.79±0.00 / 0.39±0.01; PHQ9 (scores): eRisk 0.80±0.01 / 0.43±0.01, RSDD 0.85±0.00 / 0.47±0.01, TRT 0.82±0.00 / 0.46±0.00; PHQ9 (vectors): eRisk 0.81±0.01 / 0.46±0.01, RSDD 0.85±0.00 / 0.49±0.01, TRT 0.86±0.00 / 0.52±0.00; PHQ9Plus: eRisk 0.81±0.03 / 0.49±0.00, RSDD 0.86±0.02 / 0.55±0.00, TRT 0.82±0.00 / 0.49±0.00; Unconstrained (BERT): eRisk 0.84±0.01 / 0.44±0.02, RSDD 0.86±0.00 / 0.53±0.01, TRT 0.82±0.00 / 0.47±0.00. Trained on eRisk: Pattern (threshold): eRisk - / 0.40±0.00, RSDD - / 0.32±0.00, TRT - / 0.44±0.00; Pattern (CNN): eRisk 0.80±0.00 / 0.43±0.01, RSDD 0.73±0.01 / 0.31±0.01, TRT 0.79±0.00 / 0.47±0.01; PHQ9 (scores): eRisk 0.87±0.00 / 0.54±0.02, RSDD 0.81±0.01 / 0.38±0.00, TRT 0.90±0.00 / 0.56±0.01; PHQ9 (vectors): eRisk 0.88±0.00 / 0.55±0.00, RSDD 0.82±0.00 / 0.39±0.01, TRT 0.89±0.00 / 0.56±0.04; PHQ9Plus: eRisk 0.94±0.00 / 0.73±0.03, RSDD 0.79±0.01 / 0.35±0.01, TRT 0.84±0.01 / 0.54±0.02; Unconstrained (BERT): eRisk 0.95±0.01 / 0.71±0.03, RSDD 0.81±0.02 / 0.36±0.02, TRT 0.83±0.01 / 0.50±0.02. Table 3: A depressed user's two most informative posts found by the PHQ9 (scores), PHQ9Plus, and Unconstrained (BERT) models; all posts are paraphrased for anonymity. PHQ9 (scores), post 1 (anhedonia, mood, self-esteem): Its too late to improve myself [...] I'll graduate soon, but I feel depressed, I'm overweight, and have low confidence and self-esteem [...] now there's nothing left but working for the rest of my life. no friends, no social life, nothing fun just work. PHQ9 (scores), post 2 (anhedonia, self-harm, mood, fatigue): I'm so tired of living this life [...] I just want it to end. maybe life is just so unfair and there's no explanation for why things are unfair. all the unfair things just get me frustrated [...] im overweight and I never succeed in losing weight, I fuck up every time I try [...] PHQ9Plus and BERT, post 1: I don't like the way my life is going [...] everyday is pretty much entirely spent in my room, except for several hours at the gym [...] That's also why I want people to like me, so that I'd have people to do cool things with and my days would be less lonely and boring. PHQ9Plus and BERT, post 2: So what's life like after college? I have to admit that I'm scared as fuck about it. I'm afraid that there will be no time for fun or socializing, and that I'll always have to act all grown up and professional. 
In terms of F1 and AUC, our two PHQ9 variants generalize better than BERT with only 1 Loss at most 8451 over 6 dataset-transfer settings, and the number of Win always dominates.", "Compared to BERT, our PHQ9 (vectors) obtains 5 Win, 1 Draw, and no Loss in terms of F1.", "Regarding AUC, we only observe 1 Loss replacing a Win.", "The method using PHQ9 scores generalizes slightly worse than the one using vectors but still performs better than the unconstrained models.", "For example, when trained on TRT and tested on RSDD, our PHQ9 (scores) method improves over BERT by roughly 59% F1 score and 18% AUC.", "This behavior may reflect the unusual selection of control users in TRT, where control users are sampled from r/AskReddit.", "This may introduce shortcuts (e.g., specific topics or styles) that make BERT vulnerable to the change of testing environment.", "Our classifier-constrained methods with scores and vectors are designed to avoid the spurious shortcuts present in this setting.", "For similar reasons, the extra neuron gives the PHQ9Plus model more freedom to learn shortcuts, leading to inferior generalization than PHQ9 (with both scores and vectors).", "In addition to generalizing better, the methods with symptom scores can be used to identify evidence in the form of specific posts related to the symptoms in PHQ9, which makes them more trustworthy from the perspective of mental health professionals who can examine the posts to verify that symptoms are present.", "On standard within-dataset evaluations (in gray cells), when models are trained and tested on the same corpus, we find that F1 and AUC increase as the models become less constrained, with the standalone BERT model and PH9Plus performing the best on all datasets.", "However, as previously shown, this performance does not transfer to more realistic dataset-transfer settings.", "The two pattern-based methods perform worse than the best prior method on each dataset, though they are the easiest to interpret due to the PHQ9 symptom scores associated with each post.", "When the patterns are used to train a PHQ9 (scores) model, both F1 and AUC increase substantially, with the largest improvement of 0.13 F1 and 0.12 AUC in TRT.", "Methods using PHQ9 (vec-tors) perform slightly better than those using scores, but the latter is easier to interpret since each post is associated with a symptom score.", "Both perform well in comparison with the baselines despite the fact that they are constrained by the PHQ9 symptoms.", "The add-on neuron contributes significantly to the in-domain effectiveness of PHQ9Plus, which even outperforms BERT in several settings.", "To quantify the performance of our weakly-supervised questionnaire (symptom) models, we additionally prepared a dataset of 900 samples manually labeled by three annotators.", "The annotation procedures are described in the Appendix B. The results of our symptom classifiers evaluated on the test sets are shown in Table 4.", "Overall, our symptom classifiers perform well despite being trained on weak labels.", "The concentration , eating and self-harm classifiers show strong performance, while a lower F1 is observed with the anhedonia , mood and fatigue classifiers.", "Interestingly, we find that the F1 scores of symptom classifiers tend to positively correlate with the annotator's agreement (Pearson > 0 . 
5 ).", "This suggests the low F1 score in some symptom classifiers, such as anhedonia and fatigue , might partly be due to the ambiguity of texts.", "For example, it is challenging to distinguish between an ordinary bad mood versus a depressive mood.", "Additionally, in our analysis, we find many wrong predictions where posts use symptom-like language in a more specific context, such as I completely lost my interest in him or I can't concentrate on that movie .", "These alone might not indicate a symptom, but recurrence of them might be significant.", "To examine whether our symptom classifiers can generalize beyond pattern matching, we split each pattern set into two non-overlapping groups ( g1 , g2 ), which split the original dataset into two exclusive subsets.", "Because pattern distribution are uneven, the resulting subsets are sometimes imbal-anced.", "We then evaluate our symptom classifiers on two settings (i.e., train on g1 & test on g2, and 8452 train on g2 & test on g1).", "The results shown in Table 5 show that our symptom classifiers still achieve fairly high F1 scores in both settings.", "Note that, on some symptoms (e.g., concentration, fatigue), given a small coverage of patterns in g2 , our models could still achieve good performance compared to models trained on the much larger data covered by g1 .", "This suggests that the symptom classifiers can generalize beyond the specific patterns they were trained with.", "In Figure 3, we visualize the effect of various data-construction factors on the performance of symptom classifiers.", "Regarding the data source , data obtained from mental health subreddits has more influence on effectiveness than the more general posts in SMHD.", "Discarding data from mental health subreddits leads to an average drop of nearly 0.23 F1 score in all symptoms, while the decrease after removing SMHD is 0.02.", "We attribute the immense contribution of data from mental health subreddits to the fact that mental health is the main topic of discussion in those forums; therefore, pattern matching returns fewer false-positive cases and denser symptoms, resulting in better quality training data.", "We further investigate the role of each method to remove FP matches in the positive class .", "For that purpose, we put filtered-out FP examples back into the training data and observe the variation of F1 score on the manually labeled test sets.", "In general, adding back FP examples filtered by our methods causes a total drop of nearly 0.12 in the averaged F1 score.", "Among them, instances with positive sentiment cause the highest decrease of roughly 0.04.", "Posts with third-person pronouns contribute around 0.03, while conditional clause and negation contribute modestly at around 0.02 F1.", "Similarly, we analyze the effectiveness of methods to weakly annotate the negative class by removing each of them from the training data and record the change in F1 score.", "We find that removing three methods, including keywords, pronouns, and other symptoms, causes a similar drop of roughly 0.06 each.", "Interestingly, eliminating data with positive sentiment from the negative class has a similar effect to adding them to the positive class , causing a drop of almost 0.04 F1.", "The method that changes positive examples to negative examples has the smallest impact on the F1 score (roughly 0.01).", "Overall, except for the data sources, no single labeling method has a superior impact on the quality of symptom classifiers than other methods.", "To measure the 
contribution of a symptom to detecting depression, we remove the corresponding symptom from the model and observe the drop in the F1 score.", "The results are reported in Table 6.", "On average, we could see that self-harm , fatigue , and anhedonia are the strongest indicators of depression.", "Removing them causes a 0.13-0.17 drop in the F1 score.", "This is in line with the prior finding that suicidal ideation or self-harm is highly correlated with depression (Brdvik, 2018).", "Mood , psychomotor , and self-esteem contribute moderately to depression detection, with roughly a 0.09 drop in F1 score for each.", "The remaining three symptoms, including concentration, eating, and sleep, play a less important role in detecting depression, with each contributing around 0.05 to the F1 score.", "Recent work has demonstrated that GPT-3 is a strong few-shot learner (Brown et al., 2020).", "Herein, we are interested in how well our classifier-constrained methods compare to the GPT-3 with 8453 Symptom Contribution Symptom Contribution Anhedonia 0.13 Psychomotor 0.10 Concentration 0.05 Self-esteem 0.09 Eating 0.05 Self-harm 0.17 Fatigue 0.13 Sleep 0.04 Mood 0.09 Table 6: Contribution of symptoms to depression detection.", "prompted examples.", "We prompt GPT-3 with four examples for each ( positive , negative ) class from one dataset (e.g., TRT) and evaluate on other datasets (e.g., RSDD, eRisk).", "Due to the high computational cost of GPT-3, we only evaluate on 100 positive samples and 100 negative samples from each dataset.", "We can see in Table 7 that prompted GPT-3 is consistently outperformed by our classifier-constrained methods, and the margin is often large.", "For example, among models trained on RSDD, the classifier-constrained model with CNN vectors achieves the highest F1 of 0.79 and 0.64 when tested on TRT and eRisk, respectively.", "GPT-3 performs worse with at least a 0.12 drop in F1.", "This result demonstrates that depression detection is still challenging for large few-shot learners, further highlighting our contributions of generalizable methods.", "However, we note that this setting has several limitations that prevent a completely fair comparison.", "Our methods have access to hundreds of posts, while GPT-3 has a limitation on the prompt length.", "In addition, prompt examples, which have high influence on the GPT-3 few-shot performance, need to be carefully selected and tuned.", "It is possible that we were unable to identify near-optimal prompts.", "Furthermore, it is difficult to know which posts or users should be prompted to GPT-3, so we opted to select randomly.", "Lifting this limitation would require a separate model to identify which posts should be used as input.", "In Table 3, we demonstrate approaches trained on TRT using text from an anonymized and paraphrased depressed user from the eRisk2018 dataset.", "We show the top two posts ranked by the drop in depression score when excluding each post.", "All models were able to produce correct labels with very high confidence.", "However, there is a clear difference in the posts that models rely primarily on for prediction.", "The PHQ9 (scores) model found highly relevant posts with convincing associ-Prompt/Train RSDD Test TRT Test eRisk PHQ9 (scores) 0.64 0.62 PHQ9 (vectors) 0.79 0.64 GPT-3 0.59 0.52 Prompt/Train TRT Test RSDD Test eRisk PHQ9 (scores) 0.71 0.72 PHQ9 (vectors) 0.69 0.78 GPT-3 0.61 0.54 Prompt/Train eRisk Test RSDD Test TRT PHQ9 (scores) 0.85 0.74 PHQ9 (vectors) 0.83 0.71 GPT-3 0.54 0.49 Table 7: F1 scores 
of PHQ9 models vs. GPT-3 ated symptoms.", "For example, in the first post, the PHQ9 models found 3 symptoms, including anhedonia , mood and self-esteem .", "By looking at those posts and symptoms, mental health professionals could quickly understand the patient's circumstances and make further decisions.", "The two most important posts for PHQ9Plus and BERT are more about daily life concerns or complaints, which may be less useful to explain a high depression score than the top posts used to explain the PHQ9 (scores) model.", "While these posts are relevant, they are more difficult to interpret than posts directly mentioning symptoms that are known to be relevant.", "Furthermore, in the TRT training dataset, due to the biased selection of control users, those life concerns/complaints may form a shortcut that effectively differentiates depressed users from control users.", "However, in more realistic deployment scenarios (i.e. dataset-transfer settings), the fact that such shortcuts do not generalize makes PHQ9Plus and BERT more unreliable and fragile.", "In this work, we propose a spectrum of methods for depression detection that are constrained by the presence of PHQ9 symptoms.", "In our experiments on the three datasets, we find these methods to perform well compared to strong baselines while generalizing better to similar datasets.", "This can be viewed as a proof-of-concept demonstrating that grounding depression predictions in PHQ9 can improve the generalizability of depression detection and the interpretability of the model.", "While this research focuses only on depression detection, the idea of constraining models to consider only relevant causes may be applied to a wider range of tasks, including detection of other mental health conditions with diagnostic questionnaires.", "Due to the sensitivity of the mental health related data, additional consideration needs to be taken into account when accessing and analyzing such data, as highlighted by Benton et al. (2017).", "All datasets used in this research were obtained according to each dataset's respective data usage policy.", "We did not interact with users in any way, and we refrained from showing any direct excerpts of the data in this manuscript to prevent risks from identifying users' pseudonyms.", "(All excerpts have been paraphrased.)", "Similarly, we made no attempt to identify, deanonymize, or link users to other social media accounts.", "These precautions ensure we do not draw attention to specific users who may be suffering from depression.", "All models proposed in this research were trained on social media data.", "Thus, they are likely to fail on data coming from other sources (e.g., clinical notes), and there are no accuracy guarantees even within social media data.", "Our models are not intended to replace clinicians.", "Instead, we envision the approaches we describe being used as assistive tools by mental health professionals." ]
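As a recap of the classifier-constrained architecture described in this record, the following is a minimal PyTorch sketch of a PHQ9 (scores)-style depression model: per-post symptom scores come from frozen questionnaire classifiers, CNN kernels summarize windows of consecutive posts, and a linear layer produces the user-level prediction. Channel counts and kernel sizes are assumptions, not the paper's hyperparameters.

```python
import torch
import torch.nn as nn

class ScoreBasedDepressionModel(nn.Module):
    """Depression model over frozen per-post PHQ-9 symptom scores (illustrative sizes).

    Input: a tensor of shape (batch, num_posts, 9) holding the questionnaire model's
    symptom scores for each post, sorted by timestamp. 1-D convolutions slide over
    consecutive posts (a relaxation of PHQ-9's two-week window), followed by
    max-pooling and a linear classifier over {control, depressed}.
    """
    def __init__(self, num_symptoms=9, channels=64, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(num_symptoms, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.out = nn.Linear(channels * len(kernel_sizes), 2)

    def forward(self, scores):                      # scores: (batch, num_posts, 9)
        x = scores.transpose(1, 2)                  # (batch, 9, num_posts)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.out(torch.cat(pooled, dim=1))   # user-level logits
```

During training of this head, the BERT encoder and symptom classifiers that produce `scores` would stay frozen, as described above.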
[ "abstain", "abstain", "objective", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "result", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "objective", "method", "other", "other", "objective", "other", "abstain", "method", "other", "abstain", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method" ]
[ "CommonsenseQA ( CQA ) (Talmor et al., 2019) dataset was recently released to advance the research on common-sense question answering (QA) task.", "Whereas the prior work has mostly focused on proposing QA models for this dataset, our aim is to retrieve as well as generate explanation for a given (ques-tion, correct answer choice, incorrect answer choices) tuple from this dataset.", "Our explanation definition is based on certain desiderata, and translates an explanation into a set of positive and negative common-sense properties (aka facts) which not only explain the correct answer choice but also refute the incorrect ones .", "We human-annotate a first-of-its-kind dataset (called ECQA ) of positive and negative properties, as well as free-flow explanations, for 11 KQA pairs taken from the CQA dataset.", "We propose a latent representation based property retrieval model as well as a GPT-2 based property generation model with a novel two step fine-tuning procedure.", "We also propose a free-flow explanation generation model.", "Extensive experiments show that our retrieval model beats BM25 baseline by a relative gain of 100% in F 1 score, property generation model achieves a respectable F 1 score of 36 .", "4 , and free-flow generation model achieves a similarity score of 61 .", "9 , where last two scores are based on a human correlated semantic similarity metric.", "The field of automated question answering (QA) has witnessed a rapid progress in the past few years, sometimes beating even human performance (Zhang et al., 2020).", "The reasons behind this trend include", "(i) emergence of large-sized QA datasets such as SQuAD (Ra-jpurkar et al., 2016), HotpotQA (Yang et al., 2018), CommonsenseQA (Talmor et al., 2019), NaturalQA (Kwiatkowski et al., 2019), etc., and", "(ii) emergence of powerful, large scale, preQuestion: Where is a frisbee in play likely to be?", "Answer Choices: outside park roof tree air Our Explanation: Positives Properties 1) A frisbee is a concave plastic disc designed for skimming through the air as an outdoor game.", "Negative Properties 1) A frisbee can be outside anytime, even while not in play.", "2) A frisbee can be in a park anytime, even while not in play.", "3) A frisbee can be on a roof after play.", "4) A frisbee can be in a tree after play.", "Free-Flow (FF) Explanation A frisbee is a concave plastic disc designed for skimming through the air as an outdoor game, so while in play it is most likely to be in the air.", "A frisbee can be outside or in a park anytime, and other options are possible only after play.", "trained, neural language models such as Transformer (Vaswani et al., 2017), BERT (Devlin et al., 2019), GPT (Brown et al., 2020), etc.", "Much of the prior work in QA has focused on building models for only predicting the correct answer.", "In this paper, we tackle the problem of generating an explanation for the answer of a question.", "While existing work has looked at explaining the answer predicted by a model (Amini et al., 2019), we take up the task of explaining the given gold (correct) answer in a model oblivious fashion (Jansen et al., 2018).", "We do this in the context of common-sense QA task and work with CommonsenseQA dataset.", "Explaining the known gold answers for common-sense QA is an important research problem and is far from being solved (Rajani et al., 2019).", "Two major hurdles in solving this problem include", "(i) lack of any desiderata for what constitutes an explanation (Horacek, 2017) and", "(ii) unavailability of QA 
datasets comprising high quality human-annotated explanations.", "In this work, we address the entire stack of automatically generating explanations for the CommonsenseQA task.", "This includes setting up a desiderata for the explanation, curation of a dataset in accordance with the desiderata, proposing baselines models, and careful experimentation.", "Our overall contributions can be summarized as: 1. We present a set of characteristics ( refutation complete, comprehensive, minimal, and coherent ) for what constitutes an explanation.", "For any given (question, correct answer choice, incorrect answer choices) tuple, our explanation constitutes a set of positive properties to justify the correct answer choice and a set of negative properties to refute the incorrect ones.", "2. We human annotate positive and negative properties for 11 KQA pairs from the recently released CommonsenseQA ( CQA ) dataset (Tal-mor et al., 2019).", "We also curate a free-flow explanation for each QA pair.", "An example of our human annotated explanation is shown in Table 1 1 .", "We call our dataset as ECQA (Ex-planations for CommonsenseQA) and publicly release 2 it for future research.", "3. We propose a set of models for the task of retrieval as well as generation of explanations.", "Our retrieval system, called as eXplanation Retriever ( XR ) , represents properties in a latent space, and retrieves the facts against a CQA example from a given common-sense knowledge corpus.", "Our generation system, called 1 An additional example is given in Appendix A.1.", "as eXplanation Generator ( XG ) , comprises a novel two step fine-tuned property generation model ( XGP ) to generate common-sense properties and a free-flow explanation generation", "generation model ( XGF ).", "4. We perform extensive experiments to demonstrate the effectiveness of XR and XG systems.", "We use an F 1 based evaluation, calculated via exact property match when retrieving using gold corpus of facts.", "For property generation, and retrieval using a silver corpus in the absence of gold facts, F 1 is computed using a semantic similarity metric carefully picked to have a high correlation with human judgment.", "XR outperforms BM25 by a relative gain of 100% for the gold corpus, and 70% for the sliver corpus.", "XGP achieves a F 1 score of 36 .", "4 , while XGF achieves a semantic similarity score of 61 .", "9 .", "We publicly release our code and trained models 3 .", "Bulk of the recent literature on automated QA is focused on either", "(i) proposing a new kind of dataset (Unger et al., 2014; Rajpurkar et al., 2016; Ling et al., 2017; Joshi et al., 2017; Trivedi et al., 2017; Welbl et al., 2017; Yang et al., 2018; Kwiatkowski et al., 2019; Talmor et al., 2019; Miao et al., 2020), or", "(ii) proposing a model with improved answer accuracy (Amini et al., 2019; Bhar-gav et al., 2020; Chen et al., 2020).", "As far as explanation in QA is concerned, we can either", "(i) explain the model's predicted answer, or", "(ii) explain the given gold answer without worrying about the model.", "For certain QA tasks (e.g. KBQA , MathQA , VQA ), former explanation task is more meaningful.", "For other QA tasks (e.g. Common-sense QA , ScienceQA ), the later form of explanation may be more meaningful.", "In both, one of the key challenge is to ground the definition of explanation.", "Knowledge-Base QA task (Berant et al., 2013) requires the QA model to output a logical query (e.g. 
SPARQL or SQL ) which is then executed over the underlying KB to get the answer.", "This logical query itself serves as an explanation.", "The MathQA task (Ling et al., 2017; Amini et al., 2019) requires the model to output a theorem-like proof, program, or algebraic construct which is executed to get the answer.", "Again, such a theorem serves as an explanation.", "For ScienceQA task, an expla-3 https://github.com/dair-iitd/ECQA Datasets Reasoning Type Reasoning Steps Refutation Knowledge Base of Facts Free Flow Explanation WorldTree V2 Scientific Multi-hop N Y NCOS-E Common-sense Single-hop N N YQASC Scientific Two-hop N Y N OpenBookQA Scientific Multi-hop N Y NECQA Common-sense Multi-hop Y Y Y Table 2: Comparison of various properties of the different multi-choice QA explanation datasets.", "nation naturally comprises relevant scientific facts coming from a given corpus.", "WorldTree (Jansen et al., 2018) and WorldTree V2 (Xie et al., 2020) are corpora of elementary multiple-choice science questions with gold explanations for correct answer choice.", "OpenBookQA (Mihaylov et al., 2018) is a ScienceQA dataset built over the WorldTree corpus.", "QASC (Khot et al., 2020) is a middle school level multiple-choice ScienceQA dataset.", "For other QA tasks, such as common-sense QA, reading comprehension QA (RCQA), visual QA (VQA), grounding the definition of explanation is not so obvious (Horacek, 2017) and hence, they lack labeled data as well.", "In the case of RCQA and VQA (Ghosh et al., 2018), there have been attempts to explain the predicted answers.", "Clark et al. (2020) studied the logical reasoning capacity of transformer based language models on various RCQA tasks.", "Bhagavatula et al. (2019) have proposed an NLI dataset for abductive reasoning.", "Wang et al. (2019) introduced the task of sense-making where given a pair of natural language statements, the goal is to pick the more sensible statement in the pair.", "Kotonya and Toni (2020) have proposed a dataset of explainable fact-checking in the public health domain and defined coherence properties to evaluate explanation quality.", "As far as common-sense QA is concerned, we are not aware of much prior work on generating human understandable natural language explanations either for the predicted answer or for the given gold answer .", "CQA (Talmor et al., 2019) is a popular, multiple choice, common-sense QA dataset.", "The goal behind original CQA task is confined only till answering the questions and hence almost all the submissions (Ma et al., 2019; Khashabi et al., 2020; Zhu et al., 2020; Yang et al., 2020) to the leader-board of the CQA dataset focus just on answering the question and not generating explanations.", "As far as explaining the gold answers of CQA questions are concerned, except for the works by Rajani et al. (2019), the literature is quite slim both from the perspective of the explanation annotated datasets and models.", "Rajani et al. 
(2019) recently annotated explanations for the CQA dataset and called those explanations as CoS explanation ( CoS-E for short).", "CoS-E are much shorter than our ECQA explanations (refer Table 1) and their aim was to leverage them in training a QA model so as to boost its answering accuracy.", "Their QA model first predicts CoS-E followed by leveraging the same to answer the question.", "Also, it is designed to generate only single-hop explanation which justifies only the correct answer choice and does not refute any incorrect answer choice.", "Table 2 compares our ECQA dataset with other relevant explanation datasets.", "To the best of our knowledge, both our ECQA annotation and XR , XG systems for explaining the CQA dataset are first-of-a-kind.", "The broad idea behind explaining common-sense QA is to capture how humans would justify if a QA pair is presented to them.", "However, grounding a precise definition for this human justification is still hard due to subjectivity (Horacek, 2017).", "Furthermore, depending on the type of reasoning involved in the QA task, form and shape of an explanation may vary.", "Though, it is hard to give a single definition of the explanation for QA pairs coming from the CQA dataset, we believe one can still approach this by means of putting forward desiderata or desired characteristics of a well-formed explanation: Comprehensive: Any information or reasoning, which is necessary to explain the answer should be present.", "This requires writing common-sense facts that are not present in the question but are essential for explanation.", "Refutation Complete: While it should explain why an answer choice is correct, it should also explain why rest of the choices are incorrect or not best suited as answer.", "Minimal: It should not contain any irrelevant or redundant information, especially the ones which are already present in the question.", "Coherent: All the facts and statements should be written in a coherent and free-flow form to get a meaningful and natural explanation.", "The next question is how to translate above desiderata into a right format of the explanation for the purpose of machine generation.", "A nave approach would be to consider it as a sequence of tokens or words, but it is unclear how to define metrics for deciding whether such a sequence satisfies the desiderata or not.", "So, we alternatively suggest two different formats for the explanations.", "1. 
Property Set Format: Given a CQA tuple ( q, a, I ) where, q is the question, a is the correct answer choice, I is the list of incorrect choices, this format suggests compiling a set S of commonsense atomic facts (aka properties) such that each property in S is required to either justify the correct answer choice or refute an incorrect answer choice.", "Furthermore, this format also requires the set S to be minimal in the sense that dropping any property from S may fail to either justify correct answer choice or refute one or more incorrect answer choices.", "Also, it's good to ensure that each property statement in S is atomic in the sense that it is confined to a single fact and can't be further broken down into two independent facts.", "In summary, S contains all those atomic properties that are needed for the explanation and nothing more.", "Conceptually, we further partition this set S into S + and S and call the respective properties as positive and negative , respectively.", "Positive properties justify the correct answer choice and negative properties refute the incorrect answer choices.", "Our ECQA dataset has precisely annotated these sets for the QA pairs in CQA dataset.", "An example of such S + and S sets is given in the Table 1. 2. Free Flow (FF) Format: This format essentially converts the question , the answer choices , and the knowledge fact statements from the sets S + and S into a well-formed, coherent, free-flow style paragraph.", "This is important since this is how a human might perceive an explanation to be.", "We partnered with a private firm to crowdsource the annotations in property set ( S ) format for the CQA dataset.", "The firm utilized their in-house annotation and quality control teams for this purpose.", "For each question in the CQA dataset, an annotator was shown the question, its target concept (as given in CQA ), all five answer choices, and the correct answer choice.", "As described earlier, the annotators were then asked to write the following: A set S + of positive properties, another set S of negative properties and a free-flowing English explanation using the facts encapsulated in sets S + and S .", "Each question in the CQA dataset comes with a label called target concept .", "We sorted all the questions according to their target concepts and provided questions of the same target concept to a single annotator.", "This prevented from conflicting statements appearing in positive and negative properties, and also helped speed up the annotation.", "An outcome of this exercise is shown in Table 1. 
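To make the property-set format concrete, the following is a small Python container for one annotated instance; the field names and the example values are invented for illustration and are not taken from the ECQA annotations.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ECQAInstance:
    """Property-set format of an explanation for a (question, correct, incorrect) tuple."""
    question: str
    correct_choice: str
    incorrect_choices: List[str]
    positive_properties: List[str] = field(default_factory=list)   # justify the correct choice
    negative_properties: List[str] = field(default_factory=list)   # refute the incorrect choices
    free_flow: str = ""                                            # coherent paragraph built from the properties

# Invented example, purely to show the shape of an instance.
example = ECQAInstance(
    question="Where would you keep fresh milk?",
    correct_choice="refrigerator",
    incorrect_choices=["bookshelf", "garage"],
    positive_properties=["Milk spoils quickly unless it is kept cold."],
    negative_properties=["A bookshelf is for storing books, not food.",
                         "A garage is usually not kept cold enough to preserve milk."],
    free_flow="Milk spoils unless kept cold, so it belongs in a refrigerator; "
              "a bookshelf stores books and a garage is not cold enough.",
)
```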
While it is difficult to guarantee that annotated property set is comprehensive , we tried to ensure it by asking annotators writing at least one property for each answer choice.", "We also asked them to write simple sentences by breaking down the complex sentences into two or more so that it helps in maintaining minimality .", "For the comprehensiveness and minimality of the final free-flow explanation, we explicitly asked them to include everything that appear in properties and avoid introducing anything from question and answer choices.", "The dataset quality at the ground level was ensured by a separate team of the partner firm, and random checks were performed by the authors as well.", "In this section, we highlight various insights regarding our ECQA dataset.", "There are a total of 10962 questions in the train and validation sets of CQA , and we get annotations for all of them.", "Top 3 rows of Table 3 gives the average count and the word length of properties per question.", "We also give the average word length of ECQA free-flow (FF) and CoS-E free-flow explanation for comparison.", "In order to measure how much information ECQA free-flow annotations provide, we calculated number of distinct words (nouns, verbs, adjectives, and adverbs based on POS tagging) and report their average numbers in Table 4. The first three rows compare the information content in CQA , CoS-E Statistic Avg.", "and ECQA , while fourth and fifth rows tell what extra is present in a single annotation of the two explanation datasets w.r.t to CQA .", "This gives us a rough idea that the annotation introduces new entities and relations required for the explanation.", "Comparison using word-overlap metrics and additional data insights are presented in the Appendix A.9.", "We performed two human validation experiments to assess the absolute (and relative to CoS-E ) quality of our ECQA dataset.", "In the first experiment, we asked three human judges to validate 100 samples each from our ECQA dataset.", "Out of 100 samples, 50 samples were common across judges (for normalization and correlation analysis) and 50 were different.", "Both S + and S property sets were judged on a 3-points 4 scale to capture how well (negative)positive properties are justifying (in)correctness of (in)correct answer choice(s).", "Table 5 lists down the mean ( ), standard deviation ( ), standard error ( e ), and average Pearson's correlation coefficient ( ) for both positive and negative properties.", "83 .", "33% of the samples were rated a perfect 2 score for positive properties and 66 .", "67% were rated perfect 2 for negative properties.", "We computed Pearson's correlation coefficient as follows.", "For each of the 50 commonly labeled samples, we first computed the average score across all the judges.", "Then, we computed 4 0: complete garbage, 1: partial but incomplete reasoning, 2: satisfactory reasoning.", "Pearson's coefficient between scores of an individual judge and the corresponding average scores.", "Finally, we took the average of these individual coefficients across all judges (Gaona, 2014; Agirre et al., 2012).", "In the second experiment, we asked a set of three different human judges to compare the ECQA explanations with CoS explanations for the same 100 samples as in previous validation experiment.", "For each question, both explanations were randomly shuffled and resulting pair of explanations was called as ( E 1 , E 2) .", "The judges were asked to compare E 1 with E 2 on each of the following aspects: 
comprehensiveness, refutation completeness, minimality/non-redundancy , and overall quality .", "The comparison was logged on a 4-point scale 5 .", "Column 2 of Table 6 lists down the % times our explanation stood better than CoS-E .", "In all the four aspects, ECQA is judged to be outperforming CoS-E by a huge margin.", "Pearson's coefficient can be computed for each quality measure (column) and property (row) in Table 6, giving a 4 4 matrix of coefficient values with an average value of 0 .", "774 .", "The detailed coefficient matrix is given in Appendix A.7.", "In such scenarios, Kappa 5 1: E 1 better than E 2 , 2: E 2 better than E 1 , 3: Both good, 4: Both bad score can be low (and misleading) despite very high inter-annotator agreement due to the high chances of random agreement between the annotators.", "This is true in our case since ECQA explanations are highly preferred over CoS-E ones, by the judges.", "This section describes our proposed eXplanation Retriever ( XR ) system to retrieve S + and S property sets from a given property corpus for a given question.", "XR consists of two modules -", "(i) property ranker , and", "(ii) property selector .", "The experimentation code and trained models for this and the following section are publicly released.", "6 5.1 Property Ranker Input to property ranker is a tuple ( q, a, c ) , where q is a question (in natural language), a is one of the answer choices (natural language) for the question q , and c is token 'not' if the answer choice a is incorrect and empty string otherwise.", "Property ranker ranks the properties in the given corpus based on the given tuple ( q, a, c ) .", "The architecture of property ranker comprises two parameter shared sub-modules, namely QA Encoder ( E 1 ) and Property Encoder ( E 2 ).", "Module E 1 takes a tuple ( q, a, c ) as input and outputs a vector z qac in a 512 -dimensional latent space Z .", "Design of module E 1 is inspired by sentence transformers (SBERT) (Reimers and Gurevych, 2019) and comprises a BERT layer followed by single mean-pooling and a fully connected layer.", "We picked dimensions of the latent space through hyperpa-rameter tuning on validation set.", "Module E 2 takes a property statement p (in natural language) as input and returns a vector z p in the same latent space Z .", "E 2 's architecture is identical to the E 1 , with parameter shared at every layer level.", "Training: For training property ranker, we use SBERT library.", "7 We initialize the BERT with pre-trained bert-base-uncased (Devlin et al., 2019).", "Weights of the fully connected layer are initialized randomly.", "In ECQA dataset, multiple properties from the corresponding sets S + or S could form the relevant properties (each referred as p ) for a given ( q, a, c ) .", "For the correct answer choice, all properties from the corresponding S + set are valid p .", "In case of incorrect choice, we first match the 6 https://github.com/dair-iitd/ECQA 7 https://www.sbert.net/ stemmed answer choice with the annotated properties from the set S and pick all the matches as valid properties p , and remove all those tuples from the dataset where we cannot map to any property.", "Approximately 2% ( q, a, c ) tuples get dropped from our experiments in this manner.", "Additionally, 32 questions in the original CQA dataset were marked as ambiguous by our annotators, and hence, we drop them from all our experiments.", "So there are multiple training examples for a query ( q, a, c ) corresponding to each matched relevant property ( p 
).", "Input part of each training example comprises a pair of ( q, a, c ) and a relevant commonsense property p .", "Output part of each training example comprises vector representations z qac and z p .", "The model is trained using a loss function, which forces z qac and z p to come closer in the latent space Z .", "We use multiple negatives ranking (MNR) (Henderson et al., 2017) as the loss, which is negative log-softmax over similarity of z qac and z p .", "8 Inference: For inference, we first start with a given property corpus S and encode all of them in the latent space using property encoder E 2 .", "Now, we pass any given tuple ( q, a, c ) through E 1 and obtain its latent vector representation z qac .", "Finally, we output a ranked list of the properties in the set S w.r.t to their cosine similarity with vector z qac .", "The candidate properties retrieved by the property ranker are passed to this property selection module along with the query ( q, a, c ) .", "This property selector module then filters out a smaller size relevant properties set from the given larger size retrieved properties set .", "We experiment with two variants of this module -", "(i) Topk , and", "(ii) Alignment-based Iterative Retriever (AIR) (Yadav et al., 2020).", "Topk module picks topk properties from the ranked list returned by property ranker module.", "Topk is a nave yet effective property selection module.", "We use ECQA dataset statistics to decide value for k .", "Based on Table 3, we select top3 properties for the correct answer choice and top1 property for an incorrect answer choice.", "AIR (Yadav et al., 2020) is a state-of-the-art unsupervised explanation retrieval algorithm.", "It iteratively subselects multi-hop explanations from a given set by measuring the alignment between question, answer, and explanation sentences using 8 Cosine similarity and MSE losses did not perform well.", "GloVe embeddings (Pennington et al., 2014).", "We use AIR to select the relevant set of properties from the top 50 properties given by the property ranker.", "Dataset: We first randomly split our annotated ECQA dataset into a 70 : 10 : 20 partition to form train , val , and test sets, respectively.", "For all our experiments, we train the proposed property ranker using the ECQA train set and validate it using the ECQA val set.", "We experiment with both gold and silver corpus of properties during inference.", "The gold corpus consists of properties in the ECQA dataset (including training, val, and test sets).", "Similarly, the silver corpus is the set of train and val set of ECQA dataset and an additional large size corpus of common-sense facts, called as Open Mind Common Sense (OMCS) corpus (SINGH, 2002) 9 .", "The sizes of gold and silver corpus are 63975 and 901202 , respectively.", "Metrics: We use F 1 score between the sets of gold and retrieved properties to compare the performance for retrieval from the gold corpus.", "Retrieval from the silver corpus can never fetch us the ground-truth properties for a tuple ( q, a, c ) , since they are not contained in that corpus.", "One way to overcome this is to align the retrieved properties set to the ground truth properties set.", "We propose using a maximum unweighted bipartite matching based metric to find such an alignment score.", "For this, we first create a complete bipartite graph between the ground truth and the retrieved set of properties.", "To each edge in the graph, we assign a score based on the semantic similarity of the corresponding property 
sentences.", "For this we use lexical and semantic similarity metrics such as STS-BERT score 10 , SPICE (Anderson et al., 2016), CIDEr (Vedantam et al., 2015), METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004).", "We prune the edges in bipartite graph that have semantic similarity score less than some threshold value ( ).", "We then apply a maximum unweighted bipartite matching algorithm (Kuhn, 1955) on the pruned graph to obtain a matching of predicted silver properties with ground-truth gold properties.", "We then calculate usual F 1 score assuming the matched properties as the correctly retrieved ones.", "In Table 8 we report STS-BERT and SPICE based F 1 scores as these 9 The OMCS corpus has around 800,000 common-sense facts and was used to build ConceptNet.", "two metrics are the most correlated with human judgment.", "Results on other metrics are reported in Appendix A.8.", "Details regarding our experiment to discover correlation between the five semantic similarity metrics and the human judgment, and the procedure to obtain metric-specific thresholds ( ) is given in the Appendix A.6.", "Hyperparameters: We tune hyperparameters of property ranker by maximizing the average cosine similarity over the validation set.", "Table 7 shows the best hyperparameters for our proposed property ranker obtained using grid search over validation set, where the parameters were searched in the given range.", "We use the model which achieves the best results on validation set in 5 epochs.", "We set warm-up steps and BERT hidden layer dimension to default values of 100 11 and 768 , respectively.", "Results: We have also considered the popular information retrieval method BM25 (Robertson and Zaragoza, 2009) as another choice for the property ranker module.", "We have used the publicly available implementation of BM25 12 .", "Table 8 shows the performance comparison of XR system on gold and silver corpus for different choices of the property ranker and property selector modules.", "Our proposed property ranker with topk as property selector outperforms all other combinations with a significant margin.", "In Appendix A.3, we report some anecdotal examples of retrieved properties.", "In this section we will describe our proposed GPT-2 (Radford et al., 2019) based explanation generation system called eXplanation Generator ( XG ) .", "Note that XG does not use any corpus of common-sense properties at the inference time to generate explanations.", "XG has two variants", "(i) 11 Default value taken from SBERT documentation 12 https://pypi.org/project/rank-bm25/ F 1 Score (%) XR System Gold Corpus Silver Corpus Exact STS-BERT SPICE BM25 + AIR 22 .", "XGP to generate common-sense properties, and", "(ii) XGF to generate the free-flow explanations across all the answer choices.", "In all our experiments, we use random sampling to generate the output tokens using GPT-2 and report average numbers over 3 different runs.", "Input to the XGP is a tuple ( q, a, c ) and it generates a set of properties to justify/refute the given answer choice for the given question.", "The architecture for XGP is the same as GPT-2 but we fine-tune it in a customized manner as described below.", "Training: We do a novel two-step fine-tuning of GPT-2 and refer to this model as XGP .", "In the first step, we fine-tune GPT-2 to ensure that it can generate sentences that resemble common-sense properties.", "For this, we fine-tune GPT-2 on language modeling task using a corpus of commonsense properties: ECQA train set plus OMCS 
corpus.", "We use perplexity to evaluate the quality of language model on the val set and save the model which achieves the lowest perplexity in 5 epochs.", "The input to our model is: (cid:104) BOP (cid:105) property (cid:104) EOP (cid:105) , where property is word-pieces tokens of property and (cid:104) BOP (cid:105) and (cid:104) EOP (cid:105) are special tokens to mark the beginning and end of a property.", "In the second step, we fine-tune it to learn how to generate a set of properties.", "Given a query tuple ( q, a, c ) and a sequence of gold properties, say ( p 1 , ..., p k ) , we create input to GPT-2 as: (cid:104) BOS (cid:105) question: q a is c the answer because (cid:104) BOP (cid:105) p 1 (cid:104) EOP (cid:105) ... (cid:104) BOP (cid:105) p k (cid:104) EOP (cid:105) (cid:104) EOS (cid:105) In this input template, the following set of strings are always constant: question: , is , and the answer because .", "Tokens (cid:104) BOS (cid:105) and (cid:104) EOS (cid:105) denotes the beginning and end of the sequence.", "We use train set of ECQA , preserving the ordering of properties from the annotation, so as to generate the fine-tuning data in the above template for the second fine-tuning step.", "We fine-tune for 5 epochs and save the model that achieves the lowest perplexity on the ECQA val set.", "In order to establish the novelty of this 2 step fine-tuning, we create another model ( XGP-W ) by performing only 2nd step fine-tuning on pre-trained GPT-2 and compare it with XGP .", "Inference: We use test set of ECQA to test XGP .", "The input to model is: (cid:104) BOS (cid:105) question: q a is c the answer because (cid:104) BOP (cid:105) .", "The model generates tokens until it generates (cid:104) EOS (cid:105) token.", "We parse output and collect a set of multiple properties between consecutive (cid:104) BOP (cid:105) and (cid:104) EOP (cid:105) tokens.", "Experiments: Table 9 shows the comparison of XGP and XGP-W using the bipartite graph based metric discussed in section 5. 
Note that we have also included the best retrieval model on the silver corpus from Table 8 to show that our generation models perform significantly better than it.", "The maximum output token limit of GPT-2 in both the models is set to 150 .", "We report some anecdotal examples of generated properties in Appendix A.4.", "We now discuss models to generate the free-flow natural language explanations, given a question, all answer choices , and the correct answer choice .", "There are two different variants of XGF with different training strategies and inference prompts.", "We use GPT-2 to directly output the free-flow explanation f given an input tuple ( q, o, ca ) , where q is question, o is sequence of all the answer choices for the question q , and ca is the correct answer.", "Training: We fine-tune GPT-2 for 5 epochs on train set of ECQA using standard language modeling objective.", "The input to GPT-2 during training is: (cid:104) BOS (cid:105) question: q The options are o .", "The best answer is ca because f (cid:104) EOS (cid:105) .", "Validation is done on val set of ECQA using perplexity measure.", "Inference: During inference on ECQA test set, the prompt is given till because token and generation is done until (cid:104) EOS (cid:105) token.", "Here we generate the free-flow explanations in a two-step manner.", "In the first step, we generate the properties for each answer choice of a question using the trained XGP (section 6.1) model.", "After generating all the properties, we feed them in conjunction with question, all the choices , and correct answer to our GPT-2 based system XGF-II so as to generate the free-flow explanation.", "Training: The fine-tuning of pre-trained GPT-2 proceeds in two-steps.", "First, we fine-tune on gold properties from the ECQA dataset.", "We take the model that achieves lowest perplexity on val set in 5 epochs.", "After fine-tuning on gold properties, we now fine-tune XGF-II for 5 epochs on the properties generated by XGP .", "Inference: At inference time, we first generate the properties for each answer choice using XGP .", "Using these properties, XGF-II generate the free-flow explanation.", "Experiments: Table 10 shows STS-BERT and SPICE scores between ground-truth and generated explanations by XGF .", "Both XGF variants give similar results.", "Note that we set the maximum output token limit of GPT-2 to 250 13 .", "We also tried free-flow generation with bare pre-trained GPT-2 but it resulted in complete garbage output.", "We report an anecdotal example of generated free-flow explanations in Appendix A.5.", "13 As free-flow explanations are longer than properties, we set the maximum output token limit of GPT-2 to 250 for XGF models compared to 150 used for XGP models.", "We have presented desiderata of what constitutes an explanation in the case of common-sense QA.", "Based on it, we generated a human-annotated explanation dataset ECQA for CommonsenseQA .", "We have also proposed models to retrieve and generate common-sense facts required to justify the answer choice.", "We have publicly released our crowdsourced ECQA dataset and code/models.", "In future work, we plan to explore directions to design RL-based schemes for joint training of property ranker and property selector components in the XR system and joint training of XGP and XGF-II to generate free-flow explanation.", "Another direction is to improve the accuracy and interpretability of the existing models for CommonsenseQA using the ECQA dataset.", "This work was supported by an IBM AI 
Horizons Network (AIHN) grant.", "Parag Singla is supported by IBM SUR awards and a Visvesvaraya Young Faculty Fellowship from the Govt.", "of India.", "We would like to acknowledge the use of the IIT Delhi HPC facility, the IBM cloud facility, and the IBM Cognitive Computing Cluster (CCC) for carrying out various experiments.", "We thank Avaljot Singh and Koyal Mukherjee for helpful inputs and discussions during the early stages, and Mausam for his critical comments which helped improve the paper.", "We would also like to thank Yatin Nandwani and Keshavi Sai Kolluru, who gave useful comments on the initial draft.", "We would like to thank the members of the IBM-AIHN team for their support and suggestions.", "Finally, we thank the anonymous reviewers for their insightful comments in improving the final version of the paper.", "This paper is concerned with proposing a brand-new dataset of explanations for common-sense question answers.", "The dataset was crowdsourced through a private firm and all ethical considerations were taken into account, including proper remuneration to the human annotators as well as their consent to use the dataset for our research purposes.", "We have also ensured that there is no personally identifiable information or offensive content in our annotations.", "We also sought permission from the authors of the CQA dataset to add our annotation on top of that dataset.", "As far as the external libraries used in our code base are concerned, we have sought appropriate permissions from the authors of all those external libraries which are available in the public domain but do not have any license specified.", "As far as the implications of our research contributions are concerned, this work can advance the state-of-the-art research on automated question answering requiring common-sense knowledge.", "This research can also advance technologies in areas such as automated dialog , machine debate , etc.", "In fact, generating an explanation for the correct answer choice of a question helps design fair and unbiased QA and dialog systems.", "These systems could offer huge value in sectors such as customer support, e-commerce, online education, home automation, etc." ]
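As a concrete illustration of the XR property ranker described above (a shared-weight BERT bi-encoder with mean pooling and a 512-dimensional dense layer, trained with multiple negatives ranking loss and used to rank corpus properties by cosine similarity), the sketch below shows one way this could be set up with the sentence-transformers library. The serialization of the (q, a, c) tuple, the toy data, and the helper names are assumptions for illustration, not the authors' released code.

```python
# Illustrative sketch of the XR property ranker: a shared-weight bi-encoder
# (BERT -> mean pooling -> 512-d dense layer) trained with multiple negatives
# ranking (MNR) loss, then used to rank properties by cosine similarity.
from sentence_transformers import SentenceTransformer, models, InputExample, losses, util
from torch.utils.data import DataLoader

bert = models.Transformer("bert-base-uncased")
pool = models.Pooling(bert.get_word_embedding_dimension(), pooling_mode="mean")
dense = models.Dense(in_features=pool.get_sentence_embedding_dimension(),
                     out_features=512)
encoder = SentenceTransformer(modules=[bert, pool, dense])

def qac_text(q, a, c):
    # Serialize a (q, a, c) tuple into one string for the QA encoder E1.
    # The exact serialization is an assumption; c is "not" for incorrect choices.
    return f"{q} {a} {c}".strip()

# Each training example pairs a (q, a, c) query with one relevant property p;
# MNR loss treats the other in-batch properties as negatives.
train_examples = [
    InputExample(texts=[qac_text("Where would you put uncooked crab meat?", "wharf", "not"),
                        "A wharf is a place to dock boats, not to store food."]),
    # ... more (query, property) pairs built from the S+ / S- annotations ...
]
loader = DataLoader(train_examples, shuffle=True, batch_size=64)
loss = losses.MultipleNegativesRankingLoss(encoder)
encoder.fit(train_objectives=[(loader, loss)], epochs=5, warmup_steps=100)

# Inference: embed the property corpus once, then rank by cosine similarity;
# top-3 properties are kept for the correct choice and top-1 for incorrect ones.
corpus = ["A wharf is a place to dock boats, not to store food.",
          "Uncooked meat should be kept refrigerated."]
corpus_emb = encoder.encode(corpus, convert_to_tensor=True)

def retrieve(q, a, c, k):
    query_emb = encoder.encode(qac_text(q, a, c), convert_to_tensor=True)
    hits = util.semantic_search(query_emb, corpus_emb, top_k=k)[0]
    return [corpus[h["corpus_id"]] for h in hits]
```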
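The bipartite-matching F1 used to score retrieval from the silver corpus can likewise be sketched in a few lines: prune edges whose similarity falls below a threshold, compute a maximum unweighted matching, and treat matched predictions as correct. The similarity function below is a simple word-overlap placeholder standing in for STS-BERT or SPICE, and TAU is an assumed threshold rather than the tuned metric-specific value from the paper.

```python
# Sketch of the bipartite-matching based F1 between gold and retrieved properties.
import numpy as np
from scipy.optimize import linear_sum_assignment

TAU = 0.6  # assumed pruning threshold, for illustration only

def similarity(a: str, b: str) -> float:
    # Placeholder: plug in STS-BERT, SPICE, METEOR, CIDEr or ROUGE here.
    a_tok, b_tok = set(a.lower().split()), set(b.lower().split())
    return len(a_tok & b_tok) / max(len(a_tok | b_tok), 1)

def bipartite_f1(gold: list, predicted: list) -> float:
    if not gold or not predicted:
        return 0.0
    # 0/1 adjacency matrix after pruning edges with similarity below TAU.
    adj = np.array([[1.0 if similarity(g, p) >= TAU else 0.0 for p in predicted]
                    for g in gold])
    # Maximum-weight assignment on a 0/1 matrix equals a maximum unweighted matching.
    rows, cols = linear_sum_assignment(adj, maximize=True)
    matched = int(adj[rows, cols].sum())
    if matched == 0:
        return 0.0
    precision = matched / len(predicted)
    recall = matched / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example:
# bipartite_f1(["a wharf is for docking boats"], ["wharfs are used to dock boats"])
```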
[ "abstain", "objective", "abstain", "objective", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "other", "other", "method", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "objective", "abstain", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain" ]
[ "Neural Topic Modeling with Bidirectional Adversarial Training Rui Wang Xuemeng Hu Deyu Zhou Yulan He Yuxuan Xiong Chenchen Ye Haiyang Xu School of Computer Science and Engineering, Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, China Department of Computer Science, University of Warwick, UK AI Labs Didi Chuxing Co., Ltd.", "Abstract Recent years have witnessed a surge of interests of using neural topic models for automatic topic extraction from text, since they avoid the complicated mathematical derivations for model inference as in traditional topic models such as Latent Dirichlet Allocation (LDA).", "However, these models either typically assume improper prior (e.g. Gaussian or Logistic Normal) over latent topic space or could not infer topic distribution for a given document.", "To address these limitations, we propose a neural topic modeling approach, called Bidirectional Adversarial Topic (BAT) model, which represents the first attempt of applying bidirectional adversarial training for neural topic modeling.", "The proposed BAT builds a two-way projection between the document-topic distribution and the document-word distribution.", "It uses a generator to capture the semantic patterns from texts and an encoder for topic inference.", "Furthermore, to incorporate word relatedness information, the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT) is extended from BAT.", "To verify the effectiveness of BAT and Gaussian-BAT, three benchmark corpora are used in our experiments.", "The experimental results show that BAT and Gaussian-BAT obtain more coherent topics, outperforming several competitive baselines.", "Moreover, when performing text clustering based on the extracted topics, our models outperform all the baselines, with more significant improvements achieved by Gaussian-BAT where an increase of near 6% is observed in accuracy.", "Topic models have been extensively explored in the Natural Language Processing (NLP) community for unsupervised knowledge discovery.", "Latent Dirichlet Allocation (LDA) (Blei et al., 2003), the corresponding author Logistic-Normal Dirichlet + Figure 1: Illustrated probability simplex with Logistic-Normal distribution and Dirichlet distribution.", "most popular topic model, has been extended (Lin and He, 2009; Zhou et al., 2014; Cheng et al., 2014) for various extraction tasks.", "Due to the difficulty of exact inference, most LDA variants require approximate inference methods, such as mean-field methods and collapsed Gibbs sampling.", "However, these approximate approaches have the drawback that small changes to the modeling assumptions result in a re-derivation of the inference algorithm, which can be mathematically arduous.", "One possible way in addressing this limitation is through neural topic models which employ black-box inference mechanism with neural networks.", "Inspired by variational autoencoder (VAE) (Kingma and Welling, 2013), Srivastava and Sutton (2017) used the Logistic-Normal prior to mimic the simplex in latent topic space and proposed the Neural Variational LDA (NVLDA).", "Moreover, they replaced the word-level mixture in NVLDA with a weighted product of experts and proposed the ProdLDA (Srivastava and Sutton, 2017) to further enhance the topic quality.", "Although Srivastava and Sutton (2017) used the Logistic-Normal distribution to approximate the Dirichlet distribution, they are not exactly the same.", "An illustration of these two distributions is 
shown in Figure 1 in which the Logistic-Normal distribution does not exhibit multiple peaks at the vertices of the simplex as that in the Dirichlet distribution and as such, it is less capable to capture the multi-modality which is crucial in topic modeling (Wallach et al., 2009).", "To deal with the limitation, Wang et al. (2019a) proposed the Adversarial-neural Topic Model (ATM) based on adversarial training, it uses a generator network to capture the semantic patterns lying behind the documents.", "However, given a document, ATM is not able to infer the document-topic distribution which is useful for downstream applications, such as text clustering.", "Moreover, ATM take the bag-of-words assumption and do not utilize any word relatedness information captured in word embeddings which have been proved to be crucial for better performance in many NLP tasks (Liu et al., 2018; Lei et al., 2018).", "To address these limitations, we model topics with Dirichlet prior and propose a novel Bidirectional Adversarial Topic model (BAT) based on bidirectional adversarial training.", "The proposed BAT employs a generator network to learn the projection function from randomly-sampled document-topic distribution to document-word distribution.", "Moreover, an encoder network is used to learn the inverse projection, transforming a document-word distribution into a document-topic distribution.", "Different from traditional models that often resort to analytic approximations, BAT employs a discriminator which aims to discriminate between real distribution pair and fake distribution pair, thereby helps the networks (generator and encoder) to learn the two-way projections better.", "During the adversarial training phase, the supervision signal provided by the discriminator will guide the generator to construct a more realistic document and thus better capture the semantic patterns in text.", "Meanwhile, the encoder network is also guided to generate a more reasonable topic distribution conditioned on specific document-word distributions.", "Finally, to incorporate the word relatedness information captured by word embeddings, we extend the BAT by modeling each topic with a multivariate Gaussian in the generator and propose the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT).", "We propose a novel Bidirectional Adversarial Topic (BAT) model, which is, to our best knowledge, the first attempt of using bidirectional adversarial training in neural topic modeling; We extend BAT to incorporate the word relatedness", "relatedness information into the modeling process and propose the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT); Experimental results on three public datasets show that BAT and Gaussian-BAT outperform the state-of-the-art approaches in terms of topic coherence measures.", "The effectiveness of BAT and Gaussian-BAT is further verified in text clustering.", "Our work is related to two lines of research, which are adversarial training and neural topic modeling.", "Adversarial training, first employed in Generative Adversarial Network (GAN) (Goodfellow et al., 2014), has been extensively studied from both theoretical and practical perspectives.", "Theoretically, Arjovsky (2017) and Gulra-jani (2017) proposed the Wasserstein GAN which employed the Wasserstein distance between data distribution and generated distribution as the training objective.", "To address the limitation that most GANs (Goodfellow et al., 2014; Radford et al., 2015) could not project data into a 
latent space, Bidirectional Generative Adversarial Nets (Bi-GAN) (Donahue et al., 2016) and Adversarially Learned Inference (ALI) (Dumoulin et al., 2016) were proposed.", "Adversarial training has also been extensively used for text generation.", "For example, Seq-GAN (Yu et al., 2017) incorporated a policy gradient strategy for text generation.", "RankGAN (Lin et al., 2017) ranked a collection of human-written sentences to capture the language structure for improving the quality of text generation.", "To avoid mode collapse when dealing with discrete data, MaskGAN (Fedus et al., 2018) used an actor-critic conditional GAN to fill in missing text conditioned on the context.", "To overcome the challenging exact inference of topic models based on directed graph, a replicated softmax model (RSM), based on the Restricted Boltzmann Machines was proposed in (Hinton and Salakhutdinov, 2009).", "Inspired by VAE, Miao et al. (2016) used the multivariate Gaussian as the prior distribution of latent space and proposed the fake distribution pair S-dim ~d f Generator Network (G) Discriminator Network (D) D in D out ~ f Dir ( ~ f j ~ ) (V+K)-dim Encoder Network E Representation layer Document-topic distribution layer Document-worddistribution layer ~d r ~ r real distribution pair ~p r ~p f Representation layer Joint distributions layer Representation layer V-dim S-dim K-dim V-dim S-dim K-dim Document-topic distribution layer Document-worddistribution layer Figure 2: The framework of the Bidirectional Adversarial Topic (BAT) model.", "Neural Variational Document Model (NVDM) for text modeling.", "To model topic properly, the Gaussian Softmax Model (GSM) (Miao et al., 2017) which constructs the topic distribution using a Gaussian distribution followed by a softmax transformation was proposed based on the NVDM.", "Likewise, to deal with the inappropriate Gaussian prior of NVDM, Srivastava and Sutton (2017) proposed the NVLDA which approximates the Dirichlet prior using a Logistic-Normal distribution.", "Recently, the Adversarial-neural Topic Model (ATM) (Wang et al., 2019a) is proposed based on adversarial training, it models topics with Dirichlet prior which is able to capture the multi-modality compared with logistic-normal prior and obtains better topics.", "Besides, the Adversarial-neural Event (AEM) (Wang et al., 2019b) model is also proposed for open event extraction by representing each event as an entity distribution, a location distribution, a keyword distribution and a date distribution.", "Despite the extensive exploration of this research field, scarce work has been done to incorporate Dirichlet prior, word embeddings and bidirectional adversarial training into neural topic modeling.", "In this paper, we propose two novel topic modeling approaches, called BAT and Gaussian-BAT, which are different from existing approaches in the following aspects: (1) Unlike NVDM, GSM, NVLDA and ProdLDA which model latent topic with Gaussian or logistic-normal prior, BAT and Gaussian-BAT explicitly employ Dirichlet prior to model topics; (2) Unlike ATM which could not infer topic distribution of a given document, BAT and Gaussian-BAT uses a encoder to generate the topic distribution corresponding to the document; (3) Unlike neural topic models that only utilize word co-occurrence information, Gaussian-BAT models topic with multivariate Gaussian and incorporates the word relatedness into modeling process.", "Our proposed neural topic models are based on bidirectional adversarial training (Donahue et al., 2016) and 
aim to learn the two-way non-linear projection between two high-dimensional distributions.", "In this section, we first introduce the Bidirectional Adversarial Topic (BAT) model that only employs the word co-occurrence information.", "Then, built on BAT, we model topics with multivariate Gaussian in the generator of BAT and propose the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT), which naturally incorporates word relatedness information captured in word embeddings into modeling process.", "As depicted in Figure 2, the proposed BAT consists of three components: (1) The Encoder E takes the V -dimensional document representation (cid:126) d r sampled from text corpus C as input and transforms it into the corresponding K -dimensional topic distribution (cid:126) r ; (2) The Generator G takes a random", "topic distribution (cid:126) f drawn from a Dirichlet prior as input and generates a V -dimensional fake word distribution (cid:126)d f ; (3) The Discriminator D takes the real distribution pair (cid:126)p r = [ (cid:126) r ; (cid:126)d r ] and fake distribution pair (cid:126)p f = [ (cid:126) f ; (cid:126)d f ] as input and discriminates the real distribution pairs from the fake ones.", "The outputs of the discriminator are used as supervision signals to learn E , G and D during adversarial training.", "In what follows, we describe each component in more details.", "The encoder learns a mapping function to transform document-word distribution to document-topic distribution.", "As shown in the top-left panel of Figure 2, it contains a V -dimensional document-word distribution layer, an S -dimensional representation layer and a K -dimensional document-topic distribution layer, where V and K denote vocabulary size and topic number respectively.", "More concretely, for each document d in text corpus, E takes the document representation (cid:126)d r as input, where (cid:126)d r is the representation weighted by TF-IDF, and it is calculated by: tf i,d = n i,d (cid:80) v n v,d , idf i = log | C | | C i | tf idf i,d = tf i,d idf i , d ir = tf idf i,d (cid:80) v tf idf v,d where n i,d denotes the number of i -th word appeared in document d , | C | represents the number of documents in the corpus, and | C i | means the number of documents that contain i -th word in the corpus.", "Thus, each document could be represented as a V -dimensional multinomial distribution and the i -th dimension denotes the semantic consistency between i -th word and the document.", "With (cid:126)d r as input, E firstly projects it into an S -dimensional semantic space through the representation layer as follows: (cid:126)h es = BN( W es (cid:126)d r + (cid:126)b es ) (1) (cid:126)o es = max( (cid:126)h es , leak (cid:126)h es ) (2) where W es RS V and (cid:126)b es are weight matrix and bias term of the representation layer, (cid:126)h es is the state vector normalized by batch normalization BN( ) , leak denotes the parameter of LeakyReLU activation and (cid:126)o es represents the output of representation layer.", "Then, the encoder transforms (cid:126)o es into a K dimensional topic space based on the equation below: (cid:126) r = softmax( W et (cid:126)o es + (cid:126)b et ) (3) where W et RK S is the weight matrix of topic distribution layer, (cid:126)b et represents the bias term, (cid:126) r denotes the corresponding topic distribution of the input (cid:126)d r and the k -th ( k { 1 , 2 , ..., K } ) dimension kr represents the proportion of k -th topic in document d .", "The generator G is shown 
in the bottom-left panel of Figure", "2. Contrary to encoder, it provides an inverse projection from document-topic distribution to document-word distribution and contains a K -dimensional document-topic layer, an S -dimensional representation layer and a V dimensional document-word distribution layer.", "As pointed out in (Wallach et al., 2009), the choice of Dirichlet prior over topic distribution is important to obtain interpretable topics.", "Thus, BAT employs the Dirichlet prior parameterized with (cid:126) to mimic the multi-variate simplex over topic distribution (cid:126) f .", "It can be drawn randomly based on the equation below: p ( (cid:126) f | (cid:126) ) = Dir ( (cid:126) f | (cid:126) ) (cid:44) 1 ( (cid:126) ) K (cid:89) k =1 (cid:104) kf (cid:105) k 1 (4) where (cid:126) is the K -dimensional hyper-parameter of Dirichlet prior, K is the topic number that should be set in BAT, kf [0 , 1] , follows the constrain that (cid:80) Kk =1 kf = 1 , represents the proportion of the k -th topic in the document, and normalization term ( (cid:126) ) is defined as (cid:81) Kk =1 ( k ) ( (cid:80) Kk =1 k ) .", "To learn the transformation from document-topic distribution to document-word distribution, G firstly projects (cid:126) f into an S -dimensional representation space based on equations: (cid:126)h gs = BN( W gs (cid:126) f + (cid:126)b gs ) (5) (cid:126)o gs = max( (cid:126)h gs , leak (cid:126)h gs ) (6) where W gs RS K is weight matrix of the representation layer, (cid:126)b gs represents bias term, (cid:126)h gs is the state vector normalized by batch normalization, Eq.", "6 represents the LeakyReLU activation parameterized with leak , and (cid:126)o gs is the output of the representation layer.", "where W gw RV S and (cid:126)b gw are weight matrix and bias of word distribution layer, (cid:126)d f is the word distribution correspond to (cid:126) f .", "For each v { 1 , 2 , ..., V } , the v -th dimension d vf is the probability of the v -th word in fake document (cid:126)d f .", "The discriminator D is constituted by three layers (a V + K -dimensional joint distribution layer, an S -dimensional representation layer and an output layer) as shown in the right panel of Figure", "2. It employs real distribution pair (cid:126)p r and fake distribution pair (cid:126)p f as input and then outputs D out to identify the input sources (fake or real).", "Concretely, a higher value of D out represents that D is more prone to predict the input as real and vice versa.", "In BAT, the generator models topics based on the bag-of-words assumption as in most other neural topic models.", "To incorporate the word relatedness information captured in word embeddings (Mikolov et al., 2013a,b; Pennington et al., 2014; Joulin et al., 2017; Athiwaratkun et al., 2018) into the inference process, we modify the generator of BAT and propose Gaussian-BAT, in which G models each topic with a multivariate Gaussian as shown in Figure", "3. Gaussian distributions Word Embedding fake distribution pair ~d f ~ f Dir ( ~ f j~ ) Document-topic distribution Document-worddistribution ~p f V-dim K-dim Topic-worddistributions ... 
...", "Concretely, Gaussian-BAT employs the multivariate Gaussian N ( (cid:126) k , k ) to model the k -th topic.", "Here, (cid:126) k and k are trainable parameters, they represent mean and covariance matrix respectively.", "Following its probability density, for each word v { 1 , 2 , ..., V } , the probability in the k -th topic k,v is calculated by: p ( (cid:126)e v | topic = k ) = N ( (cid:126)e v ; (cid:126) k , k ) = exp( 12 ( (cid:126)e v (cid:126) k ) T 1 k ( (cid:126)e v (cid:126) k )) (cid:112) (2 ) D e | k | (8) k,v = p ( (cid:126)e v | topic = k ) (cid:80) Vv =1 p ( (cid:126)e v | topic = k ) (9) where (cid:126)e v means the word embedding of v -th word, V is the vocabulary size, | k | = det k is the determinant of covariance matrix k , D e is the dimension of word embeddings, p ( (cid:126)e v | topic = k ) is the probability calculated by density, and (cid:126) k is the normalized word distribution of k -th topic.", "With randomly sampled topic distribution (cid:126) f and the calculated topic-word distributions { (cid:126) 1 , (cid:126) 2 , ..., (cid:126) K } , the fake word distribution (cid:126)d f corresponding to (cid:126) f can be obtained by: (cid:126)d f = K (cid:88) k =1 (cid:126) k k (10) where k is the topic proportion of the k -th topic.", "Then, (cid:126) f and (cid:126)d f are concatenated to form the fake distribution pair (cid:126)p f as shown in Figure", "3. And encoder and discriminator of Gaussian-BAT are same as BAT, shown as Figure", "2. In our experiments, the pre-trained 300-dimensional Glove (Penning-ton et al., 2014) embedding is used.", "In Figure 2, the real distribution pair (cid:126)p r = [ (cid:126) r ; (cid:126)d r ] and the fake distribution pair (cid:126)p f = [ (cid:126) f ; (cid:126)d f ] can be viewed as random samples drawn from two ( K + V ) -dimensional joint distributions P r and P f , each of them comprising of a K -dimensional Dirichlet distribution and a V -dimensional Dirichlet distribution.", "The training objective of BAT and Gaussian-BAT is to make the generated joint distribution P f close to the real joint distribution P r as much as possible.", "In this way, a two-way projection between document-topic distribution and document-word distribution could be built by the learned encoder and generator.", "To measure the distance between P r and P f , we use the Wasserstein-distance as the optimization objective, since it was shown to be more effective compared to Jensen-Shannon divergence (Arjovsky et al., 2017): Loss = E (cid:126)p f P f [ D ( (cid:126)p f )] E (cid:126)p r P r [ D ( (cid:126)p r )] (11) where D ( ) represents the output signal of the discriminator.", "A higher value denotes that the discriminator is more prone to consider the input as a real distribution pair and vice versa.", "In addition, we use weight clipping which was proposed to ensure the Lipschitz continuity (Arjovsky et al., 2017) of D .", "The training procedure of BAT and Gaussian-BAT is given in Algorithm.", "1.", "Here, c is the clipping parameter, n d represents the number of discriminator iterations per generator iteration, m is the batch size, 1 is the learning rate, 1 and 2 are hyper-parameters of Adam (Kingma and Ba, 2014), and p a represents { 1 , 1 , 2 } .", "In our experiments, we set the n d = 5 , m = 64 , 1 = 1 e 4 , c = 0 .", "01 , 1 = 0 .", "5 and 2 = 0 .", "999 .", "After model training, learned G and E will build a two-way projection between document-topic distribution and document-word distribution.", "Thus, G and E could be used for 
topic generation and cluster inference.", "To generate the word distribution of each topic, we use (cid:126)ts ( k ) , a K -dimensional vector, as the one-hot encoding of the k -th topic.", "For example, (cid:126)ts 2 = [0 , 1 , 0 , 0 , 0 , 0] T in a six topic setting.", "And the word distribution of the k -th topic is obtained by: (cid:126) k = G ( (cid:126)ts ( k ) ) (12) Likewise, given the document representation (cid:126)d r , topic distribution (cid:126) r obtained by BAT/Gaussian-BAT could be used for cluster inference based on: (cid:126) r = E ( (cid:126)d r ); c r = arg max (cid:126) r (13) where c r denotes the inferred cluster of (cid:126)d r .", "In this section, we first present the experimental setup which includes the datasets used and the baselines, followed by the experimental results.", "We evaluate BAT and Gaussian-BAT on three datasets for topic extraction and text clustering, 20Newsgroups 1 , Grolier 2 and NYTimes 3 .", "Details are summarized below: 20Newsgroups (Lang, 1995) is a collection of approximately 20,000 newsgroup articles, partitioned evenly across 20 different newsgroups.", "Grolier is built from Grolier Multimedia Encycope-dia, which covers almost all the fields in the world.", "NYTimes is a collection of news articles published between 1987 and 2007, and contains a wide range of topics, such as sports, politics, education, etc.", "We use the full datasets of 20Newsgroups 1 and Grolier 2 .", "For the NYTimes dataset, we randomly select 100,000 articles and remove the low frequency words.", "The final statistics are shown in Table 1: Dataset #Doc (Train) #Doc (Test) #Words 20Newsgroups 11,259 7,488 1,995 Grolier 29,762 -15,276 NYtimes 99,992 -12,604 Table 1: The statistics of datasets.", "We choose the following models as baselines: LDA (Blei et al., 2003) extracts topics based on word co-occurrence patterns from documents.", "We implement LDA following the parameter setting suggested in (Griffiths and Steyvers, 2004).", "NVDM (Miao et al., 2016) is an unsupervised text modeling approach based on VAE.", "We use the original implementation of the paper 4 .", "GSM (Miao et al., 2017) is an enhanced topic model based on NVDM, we use the original implementation in our experiments 5 .", "NVLDA (Srivastava and Sutton, 2017), also built on VAE but with the logistic-normal prior.", "We use the implementation provided by the author 6 .", "ProdLDA (Srivastava and Sutton, 2017), is a variant of NVLDA, in which the distribution over individual words is a product of experts.", "The original implementation is used.", "ATM (Wang et al., 2019a), is a neural topic modeling approach based on adversarial training, we implement the ATM following the parameter setting suggested in the original paper.", "Topic models are typically evaluated with the likelihood of held-out documents and topic coherence.", "However, Chang et al. (2009) showed that a higher likelihood of held-out documents does not correspond to human judgment of topic coherence.", "Thus, we follow (Roder et al., 2015) and employ four topic coherence metrics (C P, C A, NPMI and UCI) to evaluate the topics generated by various models.", "In all experiments, each topic is represented by the top 10 words according to the topic-word probabilities, and all the topic coherence values are calculated using the Palmetto library 7 .", "We firstly make a comparison of topic coherence vs. 
different topic proportions.", "Experiments are 5 https://github.com/linkstrife/NVDM-GSM 6 https://github.com/akashgit/autoencoding vi for topic models 7 https://github.com/dice-group/Palmetto Dataset Model C P C A NPMI UCI 20Newsgroups NVDM -0.2558 0.1286 -0.0984 -2.9496 GSM -0.2318 0.1067 -0.0400 -1.6083 NVLDA 0.1205 0.1763 -0.0207 -1.3466 ProdLDA 0.1858 0.2155 -0.0083 -1.5044 LDA 0.2361 0.1769 0.0523 0.3399 ATM 0.1914 0.1720 0.0207 -0.3871 BAT 0.2597 0.1976 0.0472 0.0969 Gaussian-BAT 0.3758 0.2251 0.0819 0.5925 Grolier NVDM -0.1877 0.1456 -0.0619 -2.1149 GSM 0.1974 0.1966 0.0491 -0.0410 NVLDA -0.2205 0.1504 -0.0653 -2.4797 ProdLDA -0.0374 0.1733 -0.0193 -1.6398 LDA 0.1908 0.2009 0.0497 -0.0503 ATM 0.2105 0.2188 0.0582 0.1051 BAT 0.2312 0.2108 0.0608 0.1709 Gaussian-BAT 0.2606 0.2142 0.0724 0.2836 NYtimes NVDM -0.4130 0.1341 -0.1437 -4.3072 GSM 0.3426 0.2232 0.0848 0.6224 NVLDA -0.1575 0.1482 -0.0614 -2.4208 ProdLDA -0.0034 0.1963 -0.0282 -1.9173 LDA 0.3083 0.2127 0.0772 0.5165 ATM 0.3568 0.2375 0.0899 0.6582 BAT 0.3749 0.2355 0.0951 0.7073 Gaussian-BAT 0.4163 0.2479 0.1079 0.9215 Table 2: Average topic coherence on three datasets with five topic settings [20, 30, 50, 75, 100].", "conducted on the datasets with five topic number settings [20, 30, 50, 75, 100].", "We calculate the average topic coherence values among topics whose coherence values are ranked at the top 50 % , 70 % , 90 % , 100 % positions.", "For example, to calculate the average C P value of BAT @90% , we first compute the average C P coherence with the selected topics whose C P values are ranked at the top 90% for each topic number setting, and then average the five coherence values with each corresponding to a particular topic number setting.", "The detailed comparison is shown in Figure 4.", "It can be observed that BAT outperforms the baselines on all the coherence metrics for NYTimes datasets.", "For Grolier dataset, BAT outperforms all the baselines on C P, NPMI and UCI metrics, but 20 30 50 75 100 0.3 0.2 0.1 0.0 0.1 0.2 0.3 0.4 C _ P o n 20 N e w s g r o u p s 20 30 50 75 100 0.10 0.12 0.14 0.16 0.18 0.20 0.22 0.24 C _ A o n 20 N e w s g r o u p s 20 30 50 75 100 0.10 0.05 0.00 0.05 0.10 NPMI o n 20 N e w s g r o u p s 20 30 50 75 100 3 2 1 0 1 UCI o n 20 N e w s g r o u p s 20 30 50 75 100 0.4 0.2 0.0 0.2 0.4 C _ P o n NYT i m e s 20 30 50 75 100 0.100 0.125 0.150 0.175 0.200 0.225 0.250 C _ A o n NYT i m e s 20 30 50 75 100 0.15 0.10 0.05 0.00 0.05 0.10 NPMI o n NYT i m e s 20 30 50 75 100 5 4 3 2 1 0 1 UCI o n NYT i m e s 20 30 50 75 100 0.3 0.2 0.1 0.0 0.1 0.2 0.3 C _ P o n G r o li e r 20 30 50 75 100 0.14 0.16 0.18 0.20 0.22 C _ A o n G r o li e r 20 30 50 75 100 0.075 0.050 0.025 0.000 0.025 0.050 0.075 NPMI o n G r o li e r 20 30 50 75 100 3.0 2.5 2.0 1.5 1.0 0.5 0.0 0.5 UCI o n G r o li e r Gaussian-BAT BAT ATM LDA GSM ProdLDA NVLDA NVDM Figure 5: The comparison of average topic coherence vs. different topic number on 20Newsgroups, Grolier and NYTimes.", "gives slightly worse results compared to ATM on C A. 
For 20Newsgroups dataset, BAT performs the best on C P and NPMI, but gives slightly worse results compared to ProdLDA on C A, and LDA on UCI.", "By incorporating word embeddings through trainable Gaussian distribution, Gaussian-BAT outperforms all the baselines and BAT on four coherence metrics, often by a large margin, across all the three datasets except for Grolier dataset on C A when considering 100% topics.", "This may be attribute to the following factors: (1) The Dirichlet prior employed in BAT and Gaussian-BAT could exhibit a multi-modal distribution in latent space and is more suitable for discovering semantic patterns from text; (2) ATM does not consider the relationship between topic distribution and word distribution since it only carry out adversarial training in word distribution space; (3) The incorporation of word embeddings in Gaussian-BAT helps generating more coherent topics.", "We also compare the average topic coherence values (all topics taken into account) numerically to show the effectiveness of proposed BAT and Gaussian-BAT.", "The results of numerical topic coherence comparison are listed in Table 2 and each value is calculated by averaging the average topic coherences over five topic number settings.", "The best coherence value on each metric is highlighted in bold.", "It can be observed that Gaussian-BAT gives the best overall results across all metrics and on all the datasets except for Grolier dataset on C A. To make the comparison of topics more intuitive, we provide four topic examples extracted by models in Table", "3. It can be observed that the proposed BAT and Gaussian-BAT can generate more coherent topics.", "Moreover, to explore how topic coherence varies with different topic numbers, we also provide the comparison of average topic coherence vs. 
different topic number on 20newsgroups, Grolier and NYTimes (all topics taken into account).", "The detailed comparison is shown in Figure 5.", "It could be observed that Gaussian-BAT outperforms the baselines with 20, 30, 50 and 75 topics except for Grolier dataset on C A metric.", "However, when the topic number is set to 100, Gaussian-BAT performs slightly worse than LDA (e.g., UCI for 20Newsgroups and C A for NYTimes).", "This may be caused by the increased model complexity due to the larger topic number settings.", "Likewise, BAT can achieve at least the second-best results among all the approaches in most cases for NYTimes dataset.", "For Grolier, BAT also performs the second-best except on C A metric.", "However, for 20newsgroups, the results obtained by BAT are worse than ProdLDA (C A) and LDA (UCI) due to the limited training documents in the dataset, though it still largely outperforms other baselines.", "We further compare our proposed models with baselines on text clustering.", "Due to the lack of document label information in Grolier and NYTimes, we only use 20Newsgroups dataset in our experiments.", "The topic number is set to 20 (ground-truth categories) and the performance is evaluated by accuracy ( ACC ) : ACC = max map (cid:80) N t i =1 ind( l i = map( c i )) N t (14) where N t is the number of documents in the test set, ind( ) is the indicator function, l i is the ground-truth label of i -th document, c i is the category assignment, and map ranges over all possible one-to-one mappings between labels and clusters.", "The optimal map function can be obtained by the Kuhn-Munkres algorithm (Kuhn, 1955).", "A larger accuracy value indicates a better text clustering results.", "The comparison of text clustering results on 20Newsgroups is shown in Table 4.", "Due to the poor performance of NVDM in topic coherence evaluation, its result is excluded here.", "Not surprisingly, NVLDA and ProdLDA perform worse than BAT and Gaussian-BAT that model topics with the Dirichlet prior.", "This might be caused by the fact that Logistic-Normal prior does not exhibit multiple peaks at the vertices of the simplex, as depicted in Figure 1.", "Compared with LDA, BAT achieves a comparable result in accuracy since both models have the same Dirichlet prior assumption over topics and only employ the word co-occurrence information.", "Gaussian-BAT outperforms the second best model, BAT, by nearly 6% in accuracy.", "This shows that the incorporation of word embeddings is important to improve the semantic coherence of topics and thus results in better consistency between cluster assignments and ground-truth labels.", "In this paper, we have explored the use of bidirectional adversarial training in neural topic models and proposed two novel approaches: the Bidirectional Adversarial Topic (BAT) model and the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT).", "BAT models topics with the Dirichlet prior and builds a two-way transformation between document-topic distribution and document-word distribution via bidirectional adversarial training.", "Gaussian-BAT extends from BAT by incorporating word embeddings into the modeling process, thereby naturally considers the word relatedness information captured in word embeddings.", "The experimental comparison on three widely used benchmark text corpus with the existing neural topic models shows that BAT and Gaussian-BAT achieve improved topic coherence results.", "In the future, we would like to devise a nonparametric neural topic model based 
on adversarial training.", "Besides, developing correlated topic models is another promising direction.", "We would like to thank the anonymous reviewers for their valuable comments and helpful suggestions.", "This work was funded by the National Key Research and Development Program of China (2017YFB1002801) and the National Natural Science Foundation of China (61772132).", "YH is partially supported by EPSRC (grant no. EP/T017112/1)." ]
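A minimal sketch of the clustering accuracy (ACC) metric in Eq. (14) above, added here for illustration and not taken from the paper: scipy's `linear_sum_assignment` stands in for the Kuhn-Munkres algorithm that searches over one-to-one mappings between cluster assignments and gold labels, and the function and variable names are ours.

```python
# Illustrative sketch of the clustering accuracy in Eq. (14); the Hungarian
# (Kuhn-Munkres) step is delegated to scipy.optimize.linear_sum_assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(gold_labels, cluster_ids):
    gold = np.asarray(gold_labels)
    pred = np.asarray(cluster_ids)
    n = max(gold.max(), pred.max()) + 1
    # count[c, l] = number of documents assigned to cluster c whose gold label is l
    count = np.zeros((n, n), dtype=np.int64)
    for l, c in zip(gold, pred):
        count[c, l] += 1
    # Maximising the matched counts is the same as minimising their negation.
    rows, cols = linear_sum_assignment(-count)
    return count[rows, cols].sum() / gold.size

# A clustering that is correct up to a relabelling of clusters scores 1.0:
print(clustering_accuracy([0, 0, 1, 1, 2], [2, 2, 0, 0, 1]))  # -> 1.0
```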
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Virtual adversarial training (VAT) is a powerful technique to improve model robustness in both supervised and semi-supervised settings.", "It is effective and can be easily adopted on lots of image classification and text classification tasks.", "However, its benefits to sequence labeling tasks such as named entity recognition (NER) have not been shown as significant, mostly, because the previous approach can not combine VAT with the conditional random field (CRF).", "CRF can significantly boost accuracy for sequence models by putting constraints on label transitions, which makes it an essential component in most state-of-the-art sequence labeling model architectures.", "In this paper, we propose SeqVAT, a method which naturally applies VAT to sequence labeling models with CRF.", "Empirical studies show that SeqVAT not only significantly improves the sequence labeling performance over baselines under supervised settings, but also outperforms state-of-the-art approaches under semi-supervised settings.", "While having achieved great success on various computer vision and natural language processing tasks, deep neural networks, even state-of-the-art models, are usually vulnerable to tiny input perturbations (Szegedy et al., 2014; Goodfellow et al., 2015).", "To improve the model robustness against perturbations, Goodfellow et al. (2015) proposed to train neural networks on both original training examples and adversarial examples (examples generated by adding small but worst-case perturbations to the original examples).", "This approach, named adversarial training (AT), has been reported to be highly effective on image classification (Goodfel-low et al., 2015), text classification (Miyato et al., 2017), as well as sequence labeling (Yasunaga et al., 2018).", "which uses the labels to compute adversarial losses.", "To make use of unlabeled data, virtual adversarial training (VAT) was proposed to extend AT to semi-supervised settings (Miyato et al., 2019).", "Unlike AT which treats adversarial examples as new training instances that have the same labels as original examples, VAT minimizes the KL divergence between estimated label distribution of original examples and that of adversarial examples.", "In this manner, both labeled and unlabeled data can be used in training to improve accuracy and robustness.", "As a semi-supervised learning algorithm, VAT was reported to be effective on both image (Goodfellow et al., 2015; Miyato et al., 2019) and text classifications (Miyato et al., 2017).", "Moreover, a recent study (Oliver et al., 2018) conducted comprehensive comparisons on various popular semi-supervised learning algorithms.", "VAT turned out to be the most effective one.", "Despite its success in classification tasks, VAT has not shown similar effectiveness in sequence labeling tasks.", "In the conventional classification task, the model learns a mapping between a sentence (sequence of tokens) and a label.", "Nevertheless, in sequence labeling task, the target function becomes a mapping from a sequence of tokens to a sequence of labels.", "To apply VAT on sequence labeling, Clark et al. 
(2018) proposed to use a softmax layer on the top of token representations to obtain label probability distributions for each token.", "In this fashion, VAT could take KL divergence between tokens at the same position of the original sequence and the adversarial sequence as the adversarial losses.", "This approach shows marginal improvements over baseline models on several benchmarks, but fails to achieve comparable performance as other state-of-the-art models (Clark et al., 2018; Akbik et al., 2018; Peters et al., 2018; Devlin et al., 2019).", "Although the approach above applies VAT on the entire sequence, it locally normalizes the label probability per token and assumes all transitions between labels have equal possibilities.", "But in sequence labeling tasks, label transition probabilities are not always the same.", "For example, a song name is more likely to appear after a singer name, compared to a travel company.", "To incorporate label transitions into sequence models, Lafferty et al. (2001) proposed conditional random field (CRF).", "CRF models the probability distribution of the whole label sequence given the input sequence, instead of yielding a label probability distribution for each token.", "It takes account of both token features and transition features.", "Most state-of-the-art sequence labeling models apply a CRF on top of token representations as a decoder.", "Such neural-CRF models usually outperform models without CRF (Ma and Hovy, 2016; Akbik et al., 2018; Peters et al., 2018; Yasunaga et al., 2018).", "To apply the conventional VAT on a model with CRF, one can calculate the KL divergence on the label distribution of each token between the original examples and adversarial examples.", "However, it is sub-optimal because the transition probabilities are not taken into account.", "To better address these issues, we proposed SeqVAT, a variant of VAT that can be used along with CRF.", "Our evaluation demonstrates that SeqVAT brings significant improvements in supervised settings, rather than marginal improvements reported from previous VAT-based approaches Clark et al..", "In the semi-supervised settings, SeqVAT also outperforms many widely used methods such as self-training (ST) (Yarowsky, 1995) and entropy minimization (EM) (Grandvalet and Ben-gio, 2004), as well as the state-of-the-art semi-supervised sequence labeling algorithm, cross-view training (CVT) (Clark et al., 2018).", "Sequence labeling is a series of common natural language processing tasks that predicts a label for each token within a sequence, rather than a label for the whole sequence.", "Such tasks include named entity recognition, chunking and part-of-speech (POS) tagging etc.", "Most state-of-the-art sequence labeling models are based on a neural-CRF architecture (Ma and Hovy, 2016; Akbik et al., 2018; Peters et al., 2018; Yasunaga et al., 2018).", "More precisely, the general design is to use bidirectional recurrent neural network (RNN) layers for encoding and a CRF layer for decoding.", "In addition, usually one or more convolutional neural network (CNN) or RNN layers are applied before the neural-CRF architecture to encode character-level information as part of the input.", "In this paper, we adapt the neural-CRF architecture by a CNN-LSTM-CRF model, which consists of one CNN layer to generate character embeddings, two layers of bidirectional long short-term memory (LSTM) as the encoder and a CRF layer as the decoder.", "Semi-supervised learning is an important approach to improve model performance 
without enough labeled data.", "It utilizes unlabeled data to get more information which might be beneficial for supervised tasks.", "For semi-supervised learning, two robust and widely used approaches are self-training (ST) (Yarowsky, 1995) and entropy minimization (EM) (Grandvalet and Bengio, 2004).", "In natural language processing, ST has been successfully applied to word sense disambiguation (Yarowsky, 1995) and parsing (McClosky et al., 2006), and EM also has successful application in text classification (Sachan et al., 2019).", "Recently, a powerful semi-supervised approach, cross-view training (CVT), has achieved state-of-the-art on several semi-supervised language tasks, including dependency parsing, machine translation and chunking (Clark et al., 2018).", "CVT forces the model to make consistent predictions when using the full input or partial input.", "Hence, it does not require label information and can be used for semi-supervised learning.", "In order to validate the effectiveness of our approach on semi-supervised sequence labeling, we make fair comparisons to those three semi-supervised learning methods in the experiments.", "Adversarial training (Goodfellow et al., 2015) is a regularization method that enhances model robustness against input perturbations.", "It generates adversarial examples by injecting worst-case perturbations bounded by a small norm into the original examples, and adds them into training.", "As a consequence, model predictions would be consistent regardless of the perturbations.", "Prior to AT, several papers investigated various ways of perturbations (Xie et al., 2017).", "Adversarial training was demonstrated to be more effective since it introduces the perturbations which leading to the largest increase on model loss, respective to a constrained I went to Massachusetts Word Embeddings Character CNN Input: Word Char Predictions CRF Layer Bi-LSTMs Figure 1: Sequence Labeling Model Architecture.", "size (Goodfellow et al., 2015).", "Goodfellow et al. (2015) proved the effect of adversarial training in enhancing model robustness especially towards unseen samples for image classification.", "In addition to computer vision tasks, adversarial training also demonstrated its effectiveness on language tasks, such as text classification, POS tagging, named entity recognition and chunking (Miyato et al., 2017; Yasunaga et al., 2018).", "To extend AT to semi-supervised settings, Miyato et al. 
(2019) proposed virtual adversarial training (VAT).", "Virtual means label information is not required in this new adversarial training approach and consequently it could be applied to both labeled or unlabeled training instances.", "VAT achieved state-of-the-art performance for image classification tasks (Miyato et al., 2019), and proved to be more efficient than traditional semi-supervised approaches, such as entropy minimization (Grandvalet and Bengio, 2004) and self-training (Yarowsky, 1995), from a recent study (Oliver et al., 2018).", "However, despite the successful applications on text classification (Miyato et al., 2017), VAT has not shown great benefits to semi-supervised sequence labeling tasks, due to its incompatibility with CRF.", "In this paper, SeqVAT is proposed to make VAT compatible with CRF, and achieves significant improvements in sequence labeling.", "Our baseline model architecture is illustrated in Fig.1.", "It adopts the basic architecture for several state-of-the-art sequence labeling models (Ma and Hovy, 2016; Peters et al., 2017; Akbik et al., 2018; Peters et al., 2018), called CNN-LSTM-CRF (CLC) in this paper.", "We apply a CNN layer to extract character information and concatenate its output with word embeddings as input features.", "Then, we feed the input features into LSTM layers, and decode with a CRF layer.", "300-dimension randomly initialized word embeddings serve as word-level input.", "However, the model could learn embeddings with large norm, which makes the effects of adversarial perturbations with small norm insignificant (Miyato et al., 2017).", "To avoid such effect, we normalize the word embeddings at the beginning of each epoch.", "Denote v = { v i | i = 1 , 2 , ..., n } as the embeddings set, where n is vocabulary size, a specific embedding v i is normalized by: v i = v i E ( v ) (cid:112) D ( v ) (1) where E ( v ) = 1 n n (cid:88) i =1 v i and D ( v ) = 1 n n (cid:88) i =1 ( v i E ( v )) 2 After normalization, word embeddings have zero mean and unit variance.", "Character-level information has proved to help improve the sequence labeling accuracy by capturing morphological features (Ma and Hovy, 2016).", "In this paper, 32-dimension embeddings are randomly initialized for each character.", "To ensure that adversarial perturbations have significant effects, character embeddings are also normalized at the beginning of each epoch in the same way as word embeddings.", "Suppose u = { u i | i = 1 , 2 , ..., m } where m is the number of unique characters show up in the dataset, a specific embedding u i is randomly initialized and normalized by: u i = u i E ( u ) (cid:112) D ( u ) (2) where E ( u ) = 1 m m (cid:88) i =1 u i and D ( u ) = 1 m m (cid:88) i =1 ( u i E ( u )) 2 A CNN layer with 16 unigram, 16 bigram and 32 trigram filters is applied on top of all 32-dimension embeddings for one word.", "Hence, each word has 64-dimension character embeddings which are the output of CNN layer.", "After concatenating character embeddings and word embeddings as input, all those features pass through two bidirectional LSTM layers with 256 neurons per direction to encode information for the whole sequence.", "To incorporate the probabilities of label transitions, the outputs of LSTM layers are fed into a linear-chain CRF decoder (Lafferty et al., 2001).", "Negative log-likelihood is computed as the training loss and Viterbi algorithm (Viterbi, 1967) is used for decoding.", "Adversarial training (Goodfellow et al., 2015) is an effective method to improve model 
robustness over input perturbations.", "AT first generates adversarial examples, which are close to the original examples but model is not likely to correctly predict their labels (i.e. leading to most significant loss increase).", "Then, the model is trained with both original examples and adversarial examples.", "The loss on adversarial examples are treated as adversarial loss.", "In this paper, adversarial perturbations are added to word and character embeddings respectively.", "To prevent vanishing effects of adversarial perturbations explained in section 3.1.1 and 3.1.2, embeddings are normalized at the beginning of each epoch.", "Denote w and c as normalized word and character embeddings of the whole input sequence, is parameter of model, y is a vector of labels for all tokens in the sequence, and Loss is the loss (i.e. negative log-likelihood) for the whole sequence.", "Given the bounded norms w and c respectively, the worst-case perturbations d w and d c for w and c are: d w = argmax (cid:15), || (cid:15) || 2 w Loss ( y ; w + (cid:15), c, ) (3) d c = argmax , || || 2 c Loss ( y ; w, c + , ) (4) Note that all variables, y , w , c , d w and d c here are vectors for the whole sequence, since the last layer, CRF, is modeling the whole label sequence.", "In addition, is current estimation of .", "The purpose for using constant value instead of is to emphasize that the gradient should not propagate during generation of adversarial examples.", "Hence, the worst-case perturbations d w and d c against current model can be calculated through (3) and (4) at each training step, and model can be trained on examples plus those perturbations to improve robustness against them.", "Yet, computing exact value of those perturbations with maximization is intractable for complex DNN models.", "As proposed by Goodfellow et al. 
(2015), first order approximation is applied to approximate the value of d w and d c .", "With this approximation, d w and d c can be calculated by: d w = g w || g w || 2 w (5) d c = g c || g c || 2 c (6) where g w = w Loss ( y ; w, c, ) , and g c = c Loss ( y ; w, c, ) Then, the adversarial loss L adv is formed by: L adv = Loss ( y ; w + d w , c + d c , ) (7) 3.3 Virtual Adversarial Training Nevertheless, adversarial training cannot be applied to unlabeled data since label information is required to generate adversarial examples and compute adversarial loss.", "Virtual adversarial training is proposed (Miyato et al., 2019) to adapt adversarial training to semi-supervised settings.", "In VAT, instead of using the regular loss on perturbed examples as adversarial loss, the discrepancy (KL divergence) between predictions of original examples and those of adversarial examples acts as the adversarial loss.", "With this modification, label information is not needed in the computation of adversarial loss.", "Indeed, the adversarial loss for VAT is written as: L adv = KL ( P ori || P adv ) (8) where P ori = P ( y ; w, c, ) , and P adv = P ( y ; w + d w , c + d c , ) Here, y is to emphasize that the computation of KL divergence takes current estimation of distribution over y , so that label information is not required.", "P ori and P adv are the estimated probability distributions of labels on original examples and adversarial examples respectively.", "As explained in section 1, VAT is not compatible with CRF.", "Hence, P ori and P adv here stand for sets of label distributions for tokens, computed by applying a softmax on top of LSTM output representations.", "As a consequence, the function P to estimate probability distributions of labels here is: P ( y ; w, c, ) = CLS ( w, c, ) (9) where CLS means applying softmax on top of CNN-LSTM encoder.", "However, to compute worst-case perturbations d w and d c , label information y is still needed, as in equation (3), (4), (5) and (6).", "To get rid of the label information, the worst-case perturbations are now computed based on KL divergence between P ori and P adv , given the bounded norms w and c .", "So word perturbation d w is now defined by: argmax (cid:15), || (cid:15) || 2 w KL ( P ( y ; w, c, ) || P ( y ; w + (cid:15), c, )) (10) While character perturbation d c is: argmax , || || 2 c KL ( P ( y ; w, c, ) || P ( y ; w, c + , )) (11) Those two computations are still intractable for gradient descent.", "By applying second-order approximation and a single iteration of power method, as in (Miyato et al., 2019), the word perturbation and character perturbation can be estimated with: d w = g w || g w || 2 w (12) d c = g c || g c || 2 c (13) where g w = (cid:15) KL ( P ( y ; w, c, ) || P ( y ; w + (cid:15), c, )) , g c = KL ( P ( y ; w, c, ) || P ( y ; w, c + , )) 3.4 SeqVAT Because of its incompatibility with CRF, adapting VAT to sequence labeling is not yet successful (Clark et al., 2018).", "To fully release the power of VAT to sequence labeling models with CRF, we propose a CRF-friendly VAT, named SeqVAT.", "CRF models the conditional probability of the whole label sequence given the whole input sequence.", "Consequently, instead of using the label distribution over individual token, we could use the probability distribution for the whole label sequence, to compute KL divergence.", "The probability distribution can be denoted by: P ( y ; w, c, ) = CLC ( w, c, ) (14) where y is the whole label sequence, and CLC indicates the full CLC model.", 
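A minimal PyTorch-style sketch (ours, not the authors' released code) of the word-side virtual adversarial perturbation in Eqs. (12)-(13): a small random direction is refined by one gradient step on the KL divergence and rescaled to the norm bound. `model` is a hypothetical encoder mapping word embeddings to per-token label logits, i.e. the conventional non-CRF formulation rather than the full SeqVAT objective.

```python
# Sketch of the token-level VAT perturbation estimate, Eqs. (12)-(13).
import torch
import torch.nn.functional as F

def vat_word_perturbation(model, word_emb, eps_w, xi=1e-6):
    with torch.no_grad():
        p_ori = F.softmax(model(word_emb), dim=-1)           # current estimate of P(y~; w, c, theta-hat)
    d = torch.randn_like(word_emb)                           # random init for a single power iteration
    d = xi * d / d.norm()
    d.requires_grad_(True)
    log_p_adv = F.log_softmax(model(word_emb + d), dim=-1)
    kl = F.kl_div(log_p_adv, p_ori, reduction="batchmean")   # KL(P_ori || P_adv)
    g_w = torch.autograd.grad(kl, d)[0]                      # gradient direction g_w in Eq. (12)
    return eps_w * g_w / (g_w.norm() + 1e-12)                # d_w = eps_w * g_w / ||g_w||_2
```

The character-side perturbation of Eq. (13) follows the same pattern with the character embeddings and bound eps_c.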
"Nevertheless, given a sequence with t tokens and l possible labels for each token, the total number of possible label sequences is l t .", "Considering the substantial number of possible label sequences, it is not possible to compute the full probability distribution over all possible label sequences.", "To make the computation of such distribution possible, we estimate the full distribution by only considering the probabilities of k most possible label sequences, with one additional dimension to represent all the rest label sequences.", "Thus, the estimation of the probability distribution is ( k + 1) dimensions and feasible to compute.", "To get the most possible label sequences, we apply a k-best Viterbi decoding (Huang and Chiang, 2005) on the original sequence in each training step.", "Denote S = ( s 1 , s 2 , .., s k ) as the k-best label sequences of current input embeddings w and c , and p crf as the function to get probability of a label sequence.", "Given the current parameters , the probability distribution estimation P (cid:48) can be written as: P (cid:48) ( S ; w, c, ) = ( p (cid:48) 1 , p (cid:48) 2 ,", ".., p (cid:48) k , 1 k (cid:88) i =1 p (cid:48) i ) , (15) where p (cid:48) i = p crf ( s i ; w, c, ) , i [1 , k ] Then, P ori and P adv can be denoted as: P ori = P (cid:48) ( S ; w, c, ) (16) P adv = P (cid:48) ( S ; w + d w , c + d c , ) (17) Here, d w and d c can be computed using the same approximation as VAT by: d w = g w || g w || 2 w (18) d c = g c || g c || 2 c (19) where: g w = (cid:15) KL ( P (cid:48) ( S ; w, c, ) || P (cid:48) ( S ; w + (cid:15), c, )) , g c = KL ( P (cid:48) ( S ; w, c, ) || P (cid:48) ( S ; w, c + , )) The adversarial loss for SeqVAT can be computed by: L adv = KL ( P ori || P adv ) (20) 3.5 Training with Adversarial Loss Regardless of the adversarial training method we use (AT, VAT or SeqVAT), sequence labeling loss is computed for all labeled data at each training step: L label = Loss ( y ; w, c, , ) (21) In addition, in every training step, adversarial examples are generated and adversarial loss L adv is calculated based on the corresponding adversarial training algorithm.", "To combine the sequence labeling loss and adversarial loss, the total loss is a summation of those two loss: L total = L label + L adv (22) Here, weight is introduced to balance the model accuracy (sequence labeling loss) and robustness (adversarial loss).", "This objective function is optimized with respect to .", "Note, unlabeled data might be leveraged in VAT and SeqVAT, and they do not have sequence labeling loss due to lack of annotation.", "Hence, the sequence labeling loss L label would be set to 0 for unlabeled data.", "Our proposed method is evaluated on three datasets: CoNLL 2000 (Sang and Buchholz, 2000) for chunking, CoNLL 2003 (Sang and Meulder, 2003) for named entity recognition (NER) and an internal natural language understanding (NLU) dataset for slot filling.", "used as unlabeled data pool for semi-supervised learning.", "Considering the relatively small size of those two datasets, we randomly sampled 1% of the benchmark as the unlabeled dataset.", "We still have 20 times more data than training sets of CoNLL 2000 and 2003.", "For slot filling, our NLU dataset contains labeled and unlabeled sentences for 6 domains (detailed information is shown in Table.1).", "We directly use the unlabeled data for semi-supervised experiments.", "All parameters are randomly initialized.", "All hyper-parameters are chosen by grid search on the development set.", 
"Variational dropout (Blum et al., 2015) with rate 0.2 is applied to the input and output of each LSTM layer.", "The perturbation sizes for word and character embeddings, w and c , are 0.4 and 0.2 respectively.", "The weight for adversarial loss (i.e. ) is set to 0.6.", "k is set to 3 for CoNLL datasets and 9 for our NLU dataset.", "Sequence labeling model is optimized by Adam optimizer (Kingma and Ba, 2015) with batch size 64, learning rate 0.0006 and decay rate 0.992.", "Early stopping is applied based on model performance on the development set.", "All sequence labeling tasks are evaluated with slot-F1 metric, which is used in CoNLL 2000 and CoNLL 2003 shared tasks (Sang and Buchholz, 2000; Sang and Meulder, 2003).", "We evaluate our proposed SeqVAT technique in supervised settings and compare the results with other techniques designed to improve model robustness, including AT (Miyato et al., 2017), VAT (Miyato et al., 2019) and CVT (Clark et al., 2018).", "To demonstrate the effectiveness of CRF, we compare results from models with or without CRF using each training technique mentioned above.", "In Table.2, the first set of results corresponds to models without CRF, while the second utilizes CRF.", "Note, based on the characteristics of each training technique, the added adversarial loss varies.", "Since AT is compatible with CRF, and thus its adversarial loss is computed on top of CRF.", "But as explained in Sec.1, the adversarial loss of conventional VAT cannot be calculated on top of CRF.", "Consequently, VAT in the second set of Table.2 only applies CRF for label loss.", "It uses adversarial loss without CRF.", "As shown in Table.2, regardless of the training techniques, models with CRF consistently perform better than those without it.", "This demonstrates that CRF is a crucial component in sequence labeling.", "Hence, we conduct the rest of our evaluation only on models with CRF.", "Moreover, except that AT performs slightly better than SeqVAT in Cook domain, SeqVAT can outperform all approaches in all the other do-mains/datasets.", "All improvements of SeqVAT over other approaches are statistically significant (with p-value < 0.05 in t-test).", "Compared with VAT used by Clark et al. 
(2018), SeqVAT consistently shows more significant improvements, which indicates that SeqVAT is a better way of adopting virtual adversarial loss to sequence labeling.", "VAT has been proved to be very effective in semi-supervised learning (Oliver et al., 2018).", "Our proposed SeqVAT preserves the ability of utilizing unlabeled data.", "In this work, we also compare SeqVAT with two widely used semi-supervised learning algorithms: self-training (ST) (Yarowsky, 1995), entropy minimization (EM) (Grandvalet and Bengio, 2004), and one state-of-the-art semi-supervised sequence labeling approach, cross-view training (CVT) (Clark et al., 2018).", "Detailed results are tabulated in the third set of Table.2.", "From this comparison, SeqVAT consistently outperforms conventional VAT, ST, EM, and CVT.", "The improvements over other approaches are also statistically significant with p-value < 0.05.", "These results suggest that SeqVAT is also highly effective at utilizing unlabeled data.", "To choose the optimal k in k-best decoding, we conduct experiments with different k s on supervised sequence labeling.", "The F1 score from each k is plotted in Fig.2.", "From these plots, we observe that each dataset has its own optimal k for SeqVAT, and there is no unique k that gives the best results across datasets.", "To get a better generalization over all datasets and tasks, we avoid selecting the optimal k for each dataset/domain.", "However, different sources of language have different characteristics, including vocabulary, sentence length, syntax etc.", "Using the same k for different types of text might limit the effects of SeqVAT.", "To make a balance between generalization and effectiveness, we use different k for different types of text, but the same k for all datasets/domains with the same source.", "We use k = 3 for CoNLL 2000 and 2003 (news), and k = 9 for our internal NLU dataset (spoken language).", "lation between the amount of augmented unlabeled data and model performance on both CoNLL 2000 and 2003 datasets.", "For this analysis, we specifically focus ourselves on CVT and SeqVAT, which show the best accuracy across all datasets in Table.2.", "As shown in Fig.3, the amount of unlabeled data is a crucial factor for the performance of those two approaches.", "More specifically, the performance of those two approaches increases with more unlabeled data.", "For the CoNLL 2000 dataset, CVT has better performance when the unlabeled data is limited while SeqVAT gradually outperforms with more unlabeled data.", "As for the CoNLL 2003 dataset, SeqVAT shows consistently superior performance.", "This experiment shows that both approaches can provide significant benefits with a large amount of unlabeled data.", "In addition, SeqVAT has better utilization of unlabeled data, especially when having substantial unlabeled data.", "ST utilizes the unlabeled data by augmenting training data with the teacher model predictions, while EM makes the model more confident on the predictions for unlabeled data.", "Hence, both approaches are trying to force the model to trust predictions from the teacher model.", "If the teacher initially makes wrong predictions, the error would propagate to the student model.", "Unlike them, CVT and VAT/SeqVAT construct similar sentences which might have the same labels, and force the model to make consistent predictions on them.", "If the model makes incorrect prediction for the original sentence, CVT and VAT/SeqVAT can form a discussion to reach an agreement among the prediction of the 
original sentence and that of the similar sentences.", "If the model can make correct predictions for some similar utterances, it would have a chance to fix the error.", "Consequently, CVT and VAT/SeqVAT are generally expected to be more effective than ST and EM on the use of unlabeled data.", "The major difference between CVT and VAT is the mechanism of selecting similar sentences.", "CVT takes segments of the original sentence while VAT/SeqVAT generates new sentences by replacing tokens in the original sentence with their neighbors in the embedding space.", "Each approach has its own benefits and problems: 1) CVT can handle different tokens in the similar context, but would produce noise when the key words for meaning are not in the segments; 2) VAT generates truly similar sentences, but it might not be able to cover synonyms which have large distances in the embedding space.", "Hence, the effectiveness of them highly depends on the data.", "As in Table.2, CVT and VAT might outperform each other on different domains/datasets.", "The improvements of SeqVAT over CVT and VAT can be explained by its compatibility with CRF, because CRF is a critical component for some sequence labeling tasks (including the three in this paper).", "The compatibility with CRF would largely affect the effectiveness of semi-supervised approaches.", "In other tasks where label transitions are important, we might not see significant gains from SeqVAT over VAT or CVT.", "To make VAT compatible with CRF, we propose an idea to estimate the label sequence distribution using k-best estimation.", "This idea provides a view to optimize the label sequence level distribution directly rather than work on the label distribution per token.", "This idea could be beneficial for tasks needing distribution transfer on sequence models, such as knowledge distillation, multi-source transfer learning.", "In this paper, we propose a CRF compatible VAT training algorithm and demonstrate that sequence labeling tasks can greatly benefit from it.", "Our proposed method, SeqVAT, has strong effects to improve model robustness and accuracy on supervised sequence labeling tasks.", "In addition, SeqVAT is also highly effective in semi-supervised settings and outperforms traditional semi-supervised algorithms (ST and EM) as well as a state-of-the-art approach (CVT).", "Overall, our approach is highly effective for chunking, NER and slot filling, and can be easily extended to solve other sequence labeling problems in both supervised and semi-supervised settings.", "We want to thank all the anonymous reviewers for the helpful feedback and suggestions on this work." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "method", "method", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "other" ]
[ "We investigate video-aided grammar induction, which learns a constituency parser from both unlabeled text and its corresponding video.", "Existing methods of multi-modal grammar induction focus on learning syntactic grammars from text-image pairs, with promising results showing that the information from static images is useful in induction.", "However, videos provide even richer information, including not only static objects but also actions and state changes useful for inducing verb phrases.", "In this paper, we explore rich features ( e.g. action, object, scene, audio, face, OCR and speech) from videos, taking the recent Compound PCFG model (Kim et al., 2019) as the baseline.", "We further propose a Multi-Modal Compound PCFG model (MMC-PCFG) to effectively aggregate these rich features from different modalities.", "Our proposed MMC-PCFG is trained end-to-end and outperforms each individual modality and previous state-of-the-art systems on three benchmarks, i.e. DiDeMo, YouCook2 and MSRVTT, confirming the effectiveness of leveraging video information for unsupervised grammar induction.", "Constituency parsing is an important task in natural language processing, which aims to capture syntactic information in sentences in the form of constituency parsing trees.", "Many conventional approaches learn constituency parser from human-annotated datasets such as Penn Treebank (Marcus et al., 1993).", "However, annotating syntactic trees by human language experts is expensive and time-consuming, while the supervised approaches are limited to several major languages.", "In addition, the treebanks for training these supervised parsers are small in size and restricted to the newswire domain, thus their performances tend to be worse This work was done when Songyang Zhang was an intern at Tencent AI Lab.", "when applying to other domains (Fried et al., 2019).", "To address these issues, recent approaches (Shen et al., 2018b; Jin et al., 2018; Drozdov et al., 2019; Kim et al., 2019) design unsupervised constituency parsers and grammar inducers, since they can be trained on large-scale unlabeled data.", "In particular, there has been growing interests in exploiting visual information for unsupervised grammar induction because visual information can capture important knowledge required for language learning that is ignored by text (Gleitman, 1990; Pinker and MacWhinney, 1987; Tomasello, 2003).", "This task aims to learn a constituency parser from raw unlabeled text aided by its visual context.", "Previous methods (Shi et al., 2019; Kojima et al., 2020; Zhao and Titov, 2020; Jin and Schuler, 2020) learn to parse sentences by exploiting object information from images.", "However, images are static and cannot present the dynamic interactions among visual objects, which usually correspond to verb phrases that carry important information.", "Therefore, images and their descriptions may not be fully-representative of all linguistic phenomena encountered in learning, especially when action verbs are involved.", "For example, as shown in Figure", "1(a), when parsing a sentence A squirrel jumps on stump , a single image cannot present the verb phrase jumps on stump accurately.", "Moreover, as shown in Figure", "1(b), the guitar sound and the moving fingers clearly indicate the speed of music playing, while it is impossible to present only with a static image as well.", "Therefore, it is difficult for previous methods to learn these constituents, as static images they consider lack dynamic visual and audio 
information.", "In this paper, we address this problem by leveraging video content to improve an unsupervised grammar induction model.", "In particular, we exploit the current state-of-the-art techniques in both video and audio understanding, domains of which include object, motion, scene, face, optical character, sound, and speech recognition.", "We extract features from their corresponding state-of-the-art models and analyze their usefulness with the VC-PCFG model (Zhao and Titov, 2020).", "Since different modalities may correlate with each other, independently modeling each of them may be sub-optimal.", "We also propose a novel model, Multi-Modal Compound Probabilistic Context-Free Grammars (MMC-PCFG), to better model the correlation among these modalities.", "Experiments on three benchmarks show substantial improvements when using each modality of the video content.", "Moreover, our MMC-PCFG model that integrates information from different modalities further improves the overall performance.", "Our code is available at https://github.com/ Sy-Zhang/MMC-PCFG .", "The main contributions of this paper are: We are the first to address video aided unsupervised grammar induction and demonstrate that verb related features extracted from videos are beneficial to parsing.", "We perform a thorough analysis on different modalities of video content and propose a model to effectively integrate these important modalities to train better constituency parsers.", "Experiments results demonstrate the effectiveness of our model over the previous state-of-the-art methods.", "Our model is motivated by C-PCFG (Kim et al., 2019) and its variant of the image-aided unsupervised grammar induction model, VC-PCFG (Zhao and Titov, 2020).", "We will first review the evolution of these two frameworks in Sections 2.12.2, and then discuss their limitations in Section 2.3.", "A probabilistic context-free grammar (PCFG) in Chomsky normal form can be defined as a 6-tuple ( S, N , P , , R , ) , where S is the start symbol, N , P and are the set of nonterminals, preter-minals and terminals, respectively.", "R is a set of production rules with their probabilities stored in , where the rules include binary nonterminal expansions and unary terminal expansions.", "Given a certain number of nonterminal and preterminal categories, a PCFG induction model tries to estimate rule probabilities.", "By imposing a sentence-specific prior on the distribution of possible PCFGs, the compound PCFG model (Kim et al., 2019) uses a mixture of PCFGs to model individual sentences in contrast to previous models (Jin et al., 2018) where a corpus-level prior is used.", "Specifically in the generative story, the rule probability r is estimated by the model g with a latent representation z for each sentence , which is in turn drawn from a prior p ( z ) : r = g r ( z ; ) , z p ( z ) .", "The probabilities for the CFG initial expansion rules S A , nonterminal expansion rules A B C and preterminal expansion rules T w can be estimated by calculating scores of each combination of a parent category in the left hand side of a rule and all possible child categories in the right hand side of a rule:", "S A = exp( u (cid:62) A f s ([ w S ; z ])) (cid:80) A (cid:48) N exp( u A (cid:48) f s ([ w S ; z ])) , A BC = exp( u (cid:62) BC [ w A ; z ]) (cid:80) B (cid:48) ,C (cid:48) NP exp( u (cid:62) B (cid:48) C (cid:48) [ w A ; z ])) , T w = exp( u (cid:62) w f t ([ w T ; z ])) (cid:80) w (cid:48) exp( u Tw (cid:48) f t ([ w T ; z ])) , (2)", "where A, B, C N , T P 
, w , w and u vectorial representations of words and categories, and f t and f s are encoding functions such as neural networks.", "Optimization of the PCFG induction model usually involves maximizing the marginal likelihood of a training sentence p ( ) for all sentences in a corpus.", "In the case of compound PCFGs: log p ( ) = log (cid:90) z (cid:88) t T G ( ) p ( t | z ) p ( z ) d z , (3) where t is a possible binary branching parse tree of among all possible trees T under a grammar G .", "Since computing the integral over z is intractable, log p ( ) can be optimized by maximizing its evidence lower bound ELBO( ; , ) : ELBO( ; , ) = E q ( z | ) [log p ( | z )] KL[ q ( z | ) || p ( z )] , (4) where q ( z | ) is a variational posterior, a neural network parameterized with .", "The sample log likelihood can be computed with the inside algorithm, while the KL term can be computed analytically when both prior p ( z ) and the posterior approximation q ( z | ) are Gaussian (Kingma and Welling, 2014).", "The visually grounded compound PCFGs (VC-PCFG) extends the compound PCFG model (C-PCFG) by including a matching model between images and text.", "The goal of the vision model is to match the representation of an image v to the representation of a span c in a parse tree t of a sentence .", "The word representation h i for the i th word is calculated by a BiLSTM network.", "Given a particular span c = w i , . . . , w j (0 < i < j n )] , we then compute its representation c .", "We first compute the probabilities of its phrasal labels { p ( k | c, ) | 1 k K, K = |N |} , as described in Section 2.1.", "The representation c is the sum of all label-specific span representations weighted by the probabilities we predicted: c = K (cid:88) k =1 p ( k | c, ) f k ( 1 j i + 1 j (cid:88) l = i h l ) , (5) Finally, the matching loss between a sentence and an image representation v can be calculated as a sum over all matching losses between a span and the image representation, weighted by the marginal of a span from the parser: s img ( v , ) = (cid:88) c p ( c | ) h img ( c , v ) , (6) where h img ( c , v ) is a hinge loss between the distances from the image representation v to the matching and unmatching ( i.e. sampled from a different sentence) spans c and c (cid:48) , and the distances from the span c to the matching and unmatching ( i.e. sampled from a different image) image representations v and v (cid:48) : h img ( c , v ) = E c (cid:48) [cos( c (cid:48) , v ) cos( c , v )) + (cid:15) ] + + E v (cid:48) [cos( c , v (cid:48) ) cos( c , v ) + (cid:15) ] + , (7) where (cid:15) is a positive margin, and the expectations are approximated with one sample drawn from the training data.", "During training, ELBO and the image-text matching loss are jointly optimized.", "VC-PCFG improves C-PCFG by leveraging the visual information from paired images.", "In their experiments (Zhao and Titov, 2020), comparing to C-PCFG, the largest improvement comes from NPs ( +11 . 
9% recall), while recall values of other frequent phrase types (VP, PP, SBAR, ADJP and ADVP) are fairly similar.", "The performance gain on NPs is also observed with another multi-modal induction model, VG-NSL (Shi et al., 2019; Kojima et al., 2020).", "Intuitively, image representations from image encoders trained on classification tasks very likely contain accurate information about objects in images, which is most relevant to identifying NPs 1 .", "However, they provide limited information for phrase types that mainly involve action and change, such as verb phrases.", "Representations of dynamic scenes may help the induction model to identify verbs, and also contain information about the argument structure of the verbs and nouns based on features of actions and participants extracted from videos.", "Therefore, we propose a model that induces PCFGs from raw text aided by the multi-modal information extracted from videos, and expect to see accuracy gains on such places in comparison to the baseline systems.", "In this section, we introduce the proposed multimodal compound PCFGs (MMC-PCFG).", "Instead 1 Jin and Schuler (2020) reports no improvement on English when incorporating visual information into a similar neural network-based PCFG induction model, which may be because Zhao and Titov (2020) removes punctuation from the training data, which removes a reliable source of phrasal boundary information.", "This loss is compensated by the induction model with image representations.", "We leave the study of evaluation configuration on induction results for future work.", "of purely relying on object information from images, we generalize VC-PCFG into the video do-main, where multi-modal video information is considered.", "We first introduce the video representation in Section 3.1.", "We then describe the procedure for matching the multi-modal video representation with each span in Section 3.2.", "After that we introduce the training and inference details in Section 3.3.", "A video contains a sequence of frames, denoted as V = { v i } L 0 i =1 , where v i represents a frame in a video and L 0 indicates the total number of frames.", "We extract video representation from M models trained on different tasks, which are called experts .", "Each expert focuses on extracting a sequence of features of one type.", "In order to project different expert features into the same dimension, their feature sequences are feed into linear layers (one per expert) with same output dimension.", "We denote the outputs of the m th expert after projection as F m = { f mi } L m i =1 , where f mi and L m represent the i th feature and the total number of features of the m th expert, respectively.", "A simple method would average each feature along the temporal dimension and then concatenating them together.", "However, this would ignore the relations among different modalities and the temporal ordering within each modality.", "In this paper, we use a multi-modal transformer to collect video representations (Gabeur et al., 2020; Lei et al., 2020).", "The multi-modal transformer expects a sequence as input, hence we concatenate all feature sequences together and take the form: X = [ f 1 avg , f 11 , ..., f 1 L 1 , ... 
f Mavg , f M 1 , ..., f MLM ] , (8) where f mavg is the averaged feature of { f mi } L m i =1 .", "Each transformer layer has a standard architecture and consists of multi-head self-attention module and a feed forward network (FFN).", "Since this architecture is permutation-invariant, we supplement it with expert type embeddings E and positional encoding P that are added to the input of each attention layer.", "The expert type embeddings indicate the expert type for input features and take the form: E = [ e 1 , e 1 , ..., e 1 , ..., e M , e M , ..., e M ] , (9) where e m is a learned embedding for the m th expert.", "of each feature within the video and take the form:", "where fixed encodings are used (Vaswani et al., 2017).", "After that, we collect the output of transformer that corresponds to the averaged features as the final video representation, i.e. , = { iavg } Mi =1 .", "In this way, we can learn more effective video representation by modeling the correlations of features from different modalities and different timestamps.", "To compute the similarity between a video V and a particular span c , a span representation c is obtained following Section 2.2 and projected to M separate expert embeddings via gated embedding modules (one per expert) (Miech et al., 2018):", "i 1 = W i 1 c + b i 1 , i 2 = i 1 sigmoid ( W i 2 i 1 + b i 2 ) , i = i 2 (cid:107) i 2 (cid:107) 2 , (11)", "where i is the index of expert, W i 1 , W i 2 , b i 1 , b i 2 are learnable parameters, sigmoid is an element-wise sigmoid activation and is the element-wise multiplication.", "We denote the set of expert embeddings as = { i } Mi =1 .", "The video-span similarity is computed as following, i ( c ) = exp( u (cid:62) i c ) (cid:80) Mj =1 exp( u (cid:62) j c ) , o ( , ) = M (cid:88) i =1 i ( c )cos( i , i ) , (12) where { u i } Mi =1 are learned weights.", "Given (cid:48) , an unmatched span expert embeddings of , and (cid:48) , an unmatched video representation of , the hinge loss for video is given by: h vid ( , ) = E c (cid:48) [ o ( (cid:48) , ) o ( , )) + (cid:15) ] + + E (cid:48) [ o ( , (cid:48) ) o ( , ) + (cid:15) ] + , (13) where (cid:15) is a positive margin.", "Finally the video-text matching loss is defined as: s vid ( V, ) = (cid:88) c p ( c | ) h vid ( , ) .", "(14)", "Noted that s vid can be regarded as a generalized form of s img in Equation 6, where features from different timestamps and modalities are considered.", "During training, our model is optimized by the ELBO and the video-text matching loss:", "where is a hyper-parameter balancing these two loss terms and is a video-sentence pair.", "During inference, we predict the most likely tree t given a sentence without accessing videos.", "Since computing the integral over z is intractable, t is estimated with the following approximation, t = arg max t (cid:90) z p ( t | z ) p ( z | ) d z arg max t p ( t | , ( )) , (16) where ( ) is the mean vector of the variational posterior q ( z | ) and t can be obtained using the CYK algorithm (Cocke, 1969; Younger, 1967; Kasami, 1966).", "DiDeMo (Hendricks et al., 2017) collects 10 K unedited, personal videos from Flickr with roughly 3 5 pairs of descriptions and distinct moments per video.", "There are 32 994 , 4 180 and 4 021 video-sentence pairs, validation and testing split.", "YouCook2 (Zhou et al., 2018) includes 2 K long untrimmed videos from 89 cooking recipes.", "On average, each video has 6 procedure steps described by imperative sentences.", "There are 8 713 , 969 and 3 310 video-sentence pairs in 
the training, validation and testing sets.", "MSRVTT (Xu et al., 2016) contains 10 K videos sourced from YouTube which are accompanied by 200 K descriptive captions.", "There are 130 260 , 9 940 and 59 794 video-sentence pairs in the training, validation and testing sets.", "Following the evaluation practice in Zhao and Titov (2020), we discard punctuation and ignore trivial single-word and sentence-level spans at test time.", "The gold parse trees are obtained by applying a state-of-the-art constituency parser, Benepar (Ki-taev and Klein, 2018), on the testing set.", "All models are run 4 times for 10 epochs with different random seeds.", "We evaluate both averaged corpus-level F1 (C-F1) and averaged sentence-level F1 (S-F1) numbers as well as their standard deviations.", "In order to capture the rich content from videos, we extract features from the state-of-the-art models of different tasks, including object, action, scene, sound, face, speech, and optical character recognition (OCR).", "For object and action recognition, we explore multiple models with different architectures and pre-trained dataset.", "Details are as follows: Object features are extracted by two models: ResNeXt101 (Xie et al., 2017), pre-trained on In-stagram hashtags (Mahajan et al., 2018) and fine-tuned on ImageNet (Krizhevsky et al., 2012), and SENet154 (Hu et al., 2018), trained on ImageNet.", "These datasets include images of common objects, such as, cock, kite, and goose, etc.", "We use the predicted logits as object features for both models, where the dimension is 1000 .", "Action features are extracted by three models: I3D trained on Kinetics400 (Carreira and Zisserman, 2017), R2P1D (Tran et al., 2018) trained on IG-65M (Ghadiyaram et al., 2019) and S3DG (Miech et al., 2020) trained on HowTo100M (Miech et al., 2019).", "These datasets include videos of human actions, such as playing guitar, ski jumping, and jogging, etc.", "Following the same processing steps in their original work, we extract the predicted logits as action features, where the dimension is 400 (I3D), 359 (R2P1D) and 512 (S3DG), respectively.", "Scene features are extracted by DenseNet-161 (Huang et al., 2017) trained on Places 365 (Zhou et al., 2017).", "Places 365 contains images of different scenes, such as library, valley, and rainforest, etc.", "The predicted logits are used as scene features, where the feature dimension is 365 .", "Audio features are extracted by VGGish trained on YouTube8 M (Hershey et al., 2017), where the feature dimension is 128 .", "YouTube8 M is a video dataset where different types of sound are involved, such as piano, drum, and violin.", "OCR features are extracted by two steps: characters are first recognized by combining text detector Pixel Link (Deng et al., 2018) and text recognizer SSFL (Liu et al., 2018).", "The characters are then converted to word embeddings through word 2 vec (Mikolov et al., 2013) as the final OCR features, where the feature dimension is 300 .", "detector SSD (Liu et al., 2016) and face recognizer ResNet50 (He et al., 2016).", "The feature dimension is 512 .", "Speech features are extracted by two steps: transcripts are first obtained via Google Cloud Speech to Text API.", "The transcripts are then converted to word embeddings through word 2 vec (Mikolov et al., 2013) as the final speech features, where the dimension is 300 .", "We keep sentences with fewer than 20 words in the training set due to the computational limitation.", "After filtering, the training sets cover 99 .", "4% , 98 
.", "5% and 97 .", "1% samples of their original splits in DiDeMo, YouCook2 and MSRVTT.", "We train baseline models, C-PCFG and VC-PCFG, with same hyper parameters suggested in Kim et al. (2019); Zhao and Titov (2020).", "Our MMC-PCFG is composed of a parsing model and a video-text matching model.", "The parsing model has the same parameters as VC-PCFG (please refer to their paper for details).", "For video-text matching model, all extracted expert features are projected to 512 -dimensional vectors.", "The transformer has 2 layers, a dropout probability of 10% , a hidden size of 512 and an intermediate size of 2048 .", "We select the top2000 most common words as vocabulary for all datasets.", "All the baseline methods and our models are optimized using Adam (Kingma and Ba, 2015) with the learning rate set to 0 .", "001 , 1 = 0 .", "75 and 2 = 0 .", "999 .", "All parameters are initialized with Xavier uniform initializer (Glorot and Bengio, 2010).", "The batch size is set to 16 .", "Due to the long video durations, it is infeasible to feed all features into the multi-modal transformer.", "Therefore, each feature from object, motion and scene categories is partitioned into 8 chunks and then average-pooled within each chunk.", "For features from other categories, global average pooling is applied.", "In this way, the coarse-grained temporal information is preserved.", "Noted that some videos do not have audio and some videos do not have detected faces or text characters.", "For these missing features, we pad them with zeros.", "All the aforementioned expert features are obtained from Albanie et al. (2020).", "We evaluate the proposed MMC-PCFG approach on three datasets, and compare it with recently proposed state-of-the-art methods, C-PCFG (Kim", "et al., 2019) and VC-PCFG (Zhao and Titov, 2020).", "The results are summarized in Table", "1. The values high-lighted by bold and italic fonts indicate the top2 methods, respectively.", "All results are reported in percentage ( % ).", "LBranch, RBranch and Random represent left branching trees, right branching trees and random trees, respectively.", "Since VC-PCFG is originally designed for images, it is not directly comparable with our method.", "In order to allow VC-PCFG to accept videos as input, we average video features in the temporal dimension first and then feed them into the model.", "We evaluate VC-PCFG with 10 , 7 , and 10 expert features for DiDeMo, YouCook2 and MSRVTT, respectively.", "In addition, we also include the concatenated averaged features (Concat).", "Since object and action categories involve more than one expert, we directly use experts' names instead of their categories in Table", "1. Overall performance comparison.", "We first compare the overall performance, i.e. , C-F1 and S-F1, among all models, as shown in Table", "1. The right branching model serves as a strong baseline, since English is a largely right-branching language.", "C-PCFG learns parsing purely based on text.", "Compared to C-PCFG, the better overall performance of VC-PCFG demonstrates the effectiveness of leveraging video information.", "Compared within VC-PCFG, concatenating all features together may not even outperform a model trained on a single expert (R2P1D v.s. 
"We evaluate the proposed MMC-PCFG approach on three datasets and compare it with the recently proposed state-of-the-art methods C-PCFG (Kim et al., 2019) and VC-PCFG (Zhao and Titov, 2020).", "The results are summarized in Table 1.", "The values highlighted in bold and italic fonts indicate the top-2 methods, respectively.", "All results are reported in percentage (%).", "LBranch, RBranch and Random represent left-branching trees, right-branching trees and random trees, respectively.", "Since VC-PCFG is originally designed for images, it is not directly comparable with our method.", "In order to allow VC-PCFG to accept videos as input, we first average video features over the temporal dimension and then feed them into the model.", "We evaluate VC-PCFG with 10, 7, and 10 expert features for DiDeMo, YouCook2 and MSRVTT, respectively.", "In addition, we also include the concatenated averaged features (Concat).", "Since the object and action categories involve more than one expert, we directly use the experts' names instead of their categories in Table 1.", "Overall performance comparison.", "We first compare the overall performance, i.e., C-F1 and S-F1, among all models, as shown in Table 1.", "The right-branching model serves as a strong baseline, since English is a largely right-branching language.", "C-PCFG learns to parse based purely on text.", "Compared to C-PCFG, the better overall performance of VC-PCFG demonstrates the effectiveness of leveraging video information.", "Within VC-PCFG, concatenating all features together may not even outperform a model trained on a single expert (R2P1D vs. Concat on DiDeMo and MSRVTT).", "The reason is that each expert is learned independently, so their correlations are not considered.", "In contrast, our MMC-PCFG outperforms all baselines on C-F1 and S-F1 on all datasets.", "The superior performance indicates that our model can leverage the benefits of all the experts.", "The larger improvement on DiDeMo may be caused by the diversity of its video content: videos in DiDeMo are more diverse in scenes, actions and objects, which provides a greater opportunity for leveraging video information.", "Moreover, the superior performance over Concat demonstrates the importance of modeling relations among different experts and different timestamps.", "Performance comparison among different phrase types.", "We compare the models' recalls on the three most frequent phrase types (NP, VP and PP).", "These three types cover 77.4%, 80.1% and 82.4% of the spans in the gold trees of DiDeMo, YouCook2 and MSRVTT, respectively.", "In the following, we compare their performance on DiDeMo, as shown in Table 1.", "Comparing the VC-PCFG models trained with a single expert, we find that object features (ResNeXt and SENet) achieve the top-2 recalls on NPs, while action features (I3D, R2P1D and S3DG) achieve the top-3 recalls on VPs and PPs.", "This indicates that different experts help the parser learn syntactic structure from different aspects.", "Meanwhile, action features improve C-PCFG on VPs and PPs by a large margin, which once again verifies the benefits of using video information.", "Comparing our MMC-PCFG with VC-PCFG, our model achieves top-2 recall with smaller variance on NP, VP and PP.", "This demonstrates that our model can take advantage of the different experts and induce grammars consistently.", "The low VP recall of C-PCFG on DiDeMo may be caused by the parser attaching a high-attaching PP to the rest of the sentence instead of to the rest of the verb phrase, which breaks the whole VP.", "For PPs, C-PCFG attaches prepositions to the preceding word, which may be caused by confusion between prepositions in PPs and phrasal verbs.",
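For concreteness, the following is a minimal sketch, not the evaluation script used in the paper, of the span-based metrics reported in Table 1: sentence-level F1 over unlabeled spans and recall grouped by gold phrase label (NP, VP, PP). It assumes each predicted tree is given as a set of (start, end) spans and each gold tree as a dict from span to label, with punctuation and trivial spans already removed.

```python
from collections import defaultdict

def sentence_f1(pred_spans, gold_spans):
    """Unlabeled span F1 for a single sentence; spans are (start, end) tuples."""
    if not pred_spans and not gold_spans:
        return 1.0
    overlap = len(pred_spans & gold_spans)
    p = overlap / len(pred_spans) if pred_spans else 0.0
    r = overlap / len(gold_spans) if gold_spans else 0.0
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

def sentence_level_f1(all_pred, all_gold):
    """S-F1: macro average of per-sentence F1 over the test set."""
    scores = [sentence_f1(p, g) for p, g in zip(all_pred, all_gold)]
    return sum(scores) / len(scores)

def recall_by_label(all_pred, all_gold_labeled):
    """Recall of gold spans grouped by phrase label, e.g. NP, VP, PP.

    all_pred: list of predicted span sets, one per sentence.
    all_gold_labeled: list of {span: label} dicts, one per sentence.
    """
    hit, total = defaultdict(int), defaultdict(int)
    for pred, gold in zip(all_pred, all_gold_labeled):
        for span, label in gold.items():
            total[label] += 1
            if span in pred:
                hit[label] += 1
    return {label: hit[label] / total[label] for label in total}
```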
"In this section, we conduct several ablation studies on DiDeMo, shown in Figures 2-4.", "All results are reported in percentage (%).", "Performance comparison over constituent length.", "We first demonstrate the model performance for constituents at different lengths in Figure 2.", "As constituent length increases, the recall of all models (except RBranch) decreases, as expected (Kim et al., 2019; Zhao and Titov, 2020).", "MMC-PCFG outperforms C-PCFG and VC-PCFG under all constituent lengths.", "We further illustrate the label distribution over constituent length in Figure 3.", "[Figure 3: Label distributions over constituent length on DiDeMo (%); columns are constituent lengths 2-13 followed by the row total: NP 26.5, 9.1, 3.6, 2.1, 1.9, 0.8, 0.4, 0.2, 0.1, 0.1, 0.1, 0.0 (total 45.0); VP 5.9, 5.8, 6.4, 4.4, 3.2, 2.1, 1.6, 1.0, 0.8, 0.3, 0.2, 0.2 (total 32.0); PP 6.9, 9.3, 3.8, 1.2, 0.8, 0.5, 0.1, 0.1, 0.0, 0.0, 0.0, 0.0 (total 22.8); All 39.3, 24.2, 13.9, 7.8, 5.8, 3.4, 2.2, 1.4, 0.9, 0.4, 0.3, 0.2 (total 100.0).]", "We find that approximately 98.1% of the constituents have fewer than 9 words and most of them are NPs, VPs and PPs.", "This suggests that the improvement on NPs, VPs and PPs can strongly affect the overall performance.", "Consistency between different models.", "Next, we analyze the consistency of these different models.", "The consistency between two models is measured by averaging sentence-level F1 scores over all possible pairings of their different runs (Williams et al., 2018), where different runs are models trained with different random seeds.", "We plot the consistency for each pair of models in Figure 4 and call it the consistency matrix.", "[Figure 4: Consistency scores for different models on DiDeMo; rows and columns are the single-expert models (ResNeXt, SENet, I3D, R2P1D, S3DG, Scene, Audio, OCR, Face, Speech) and MMC-PCFG.]", "Comparing the self F1 of all the models (the diagonal of the matrix), R2P1D has the highest score, suggesting that R2P1D is the most reliable feature for helping the parser converge to a specific grammar.", "Comparing the models trained with different single experts, ResNeXt vs. SENet reaches the highest non-self F1, since they are both object features trained on ImageNet and have similar effects on the parser.", "We also find that the lowest non-self F1 comes from Audio vs. I3D, since they are extracted from different modalities (sound vs. video).", "Compared with other models, our model is most consistent with R2P1D, indicating that R2P1D contributes most to our final prediction.",
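The consistency score above can be sketched as follows; this is an assumed implementation, reusing `sentence_f1` and `sentence_level_f1` from the earlier sketch. For two models, it averages the sentence-level F1 between the predictions of every pairing of their runs; a model's self F1 (the diagonal of the matrix) averages over distinct pairs of its own runs.

```python
from itertools import combinations, product

def consistency(runs_a, runs_b=None):
    """Average sentence-level F1 over all pairings of runs from two models.

    Each run is a list of predicted span sets, one per test sentence.
    If runs_b is None, compute the self F1 of a single model by comparing
    all distinct pairs of its own runs.
    """
    if runs_b is None:
        pairs = [(runs_a[i], runs_a[j]) for i, j in combinations(range(len(runs_a)), 2)]
    else:
        pairs = [(a, b) for a, b in product(runs_a, runs_b)]
    scores = [sentence_level_f1(a, b) for a, b in pairs]
    return sum(scores) / len(scores)
```

With 4 runs per model, this averages over 16 pairings for two different models and 6 pairings for a model's self F1.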
"Contribution of different modalities.", "We also evaluate how different modalities contribute to the performance of MMC-PCFG.", "We divide the current experts into three groups: video (object, action, scene and face), audio (audio) and text (OCR and ASR).", "By ablating one group during training, we find that the model without the video experts has the largest performance drop (see Table 2).", "Therefore, video contributes most to the performance among all modalities.", "In Figure 5, we visualize a parse tree predicted by the best run of SENet154, I3D and MMC-PCFG.", "We can observe that SENet identifies all NPs but fails at the VP.", "I3D correctly predicts the VP but fails at recognizing an NP, the man.", "Our MMC-PCFG takes advantage of all the experts and produces the correct prediction.", "Grammar Induction: Grammar induction and unsupervised parsing have been a long-standing problem in computational linguistics (Carroll and Charniak, 1992).", "Recent work has utilized neural networks to predict constituency structures without supervision (Shen et al., 2018a; Drozdov et al., 2019; Shen et al., 2018b; Kim et al., 2019; Jin et al., 2019a) and has shown promising results.", "In addition to learning purely from text, there is growing interest in using image information to improve the accuracy of induced constituency trees (Shi et al., 2019; Kojima et al., 2020; Zhao and Titov, 2020; Jin and Schuler, 2020).", "Different from previous work, our work improves the constituency parser by using videos, which contain richer information than images.", "Video-Text Matching: Video-text matching has been widely studied in various tasks, such as video retrieval (Liu et al., 2019; Gabeur et al., 2020), moment localization with natural language (Zhang et al., 2019, 2020) and video question answering (Xu et al., 2017; Jin et al., 2019b).", "It aims to learn video-semantic representations in a joint embedding space.", "Recent works (Liu et al., 2019; Gabeur et al., 2020; Chen et al., 2020) focus on learning multi-modal video representations to match with text.", "In this work, we borrow this idea to match video and textual representations.", "In this work, we have presented a new task referred to as video-aided unsupervised grammar induction.", "This task aims to improve grammar induction models by using aligned video-sentence pairs, as an effective way to address the limitation of current image-based methods, where only object information from static images is considered and important verb-related information from vision is missing.", "Moreover, we present Multi-Modal Compound Probabilistic Context-Free Grammars (MMC-PCFG) to effectively integrate video features extracted from different modalities and induce more accurate grammars.", "Experiments on three datasets demonstrate the effectiveness of our method.", "We thank the support of NSF awards IIS-1704337, IIS-1722847, IIS-1813709, and the generous gift from our corporate sponsors." ]
[ "objective", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "objective", "abstain", "result", "other", "objective", "objective", "objective", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "abstain", "objective", "abstain", "method", "objective", "other" ]