| gem_id | paper_id | paper_title | paper_abstract | paper_content | paper_headers | slide_id | slide_title | slide_content_text | target | references |
|---|---|---|---|---|---|---|---|---|---|---|
| stringlengths 37-41 | stringlengths 3-4 | stringlengths 19-183 | stringlengths 168-1.38k | sequence | sequence | stringlengths 37-41 | stringlengths 2-85 | stringlengths 11-2.55k | stringlengths 11-2.55k | list |
GEM-SciDuet-train-131#paper-1354#slide-17 | 1354 | Neural Argument Generation Augmented with Externally Retrieved Evidence | High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both human and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topicrelevant content than a popular sequence-tosequence generation model according to both automatic evaluation and human assessments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279
],
"paper_content_text": [
"Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .",
"A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.",
"For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.",
"In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .",
"Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.",
"We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.",
"As a consequence, there exists a pressing need for automating the argument construction process.",
"To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .",
"Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.",
"In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.",
"One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.",
"Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.",
"This makes them unwieldy to be adapted for new domains.",
"In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.",
"To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.",
"Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.",
"Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".",
"Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".",
"Concretely, our neural argument generation model adopts the popular encoder-decoderbased sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .",
"Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia 2 .",
"Our decoder consists of two separate parts, one of which first generates keyphrases as intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.",
"Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.",
"We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.",
"Results suggest that our model generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generations models.",
"The rest of this paper is organized as follows.",
"Section 2 highlights the roadmap of our system.",
"The dataset used for our study is introduced in Section 3.",
"The model formulation and retrieval methods are detailed in Sections 4 and 5.",
"We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.",
"Related work is discussed in Section 9.",
"Finally, we conclude in Section 10.",
"Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .",
"Given a statement, a set of queries are constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.",
"A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.",
"The generation model then encodes the statement and the evidence with a shared encoder in sequence.",
"Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder which produces the final argument.",
"Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.",
"Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed with detailed reasons, then other users reply with counter-arguments.",
"Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.",
"In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .",
"Figure 2: Overview of our system pipeline (best viewed in color).",
"Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from statement as queries (marked in red and boldface).",
"A reranking module then outputs top sentences as evidence.",
"The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.",
"During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.",
"Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.",
"After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high quality root replies.",
"We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.",
"A Focused Domain Dataset.",
"The current dataset contains diverse domains with unbalanced numbers of arguments.",
"We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.",
"However, topic labels are not available for the discussions.",
"We thus construct a domain classifier for politics vs. non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts 5 .",
"Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.",
"Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.",
"6 In total, 264,670 politics abstracts and 827,437 of non-politics are labeled.",
"Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.",
"7 Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.",
"The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.",
"Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.",
"Extended from the successful seq2seq attentional model (Bahdanau et al., 2015) , our proposed model is novel in the following ways.",
"First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.",
"By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.",
"Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.",
"Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.",
"Model Formulation Our model takes as input a sequence of tokens x = {x O ; x E }, where x O is the statement se- quence and x E contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.",
"A special token <evd> is inserted between x O and x E .",
"Our model then first generates a set of keyphrases as a sequence y p = {y p l }, followed by an argument y a = {y a t }, by maximizing log P (y|x), where y = {y p ; y a }.",
"The objective is further decomposed into t log P (y t |y 1:t−1 , x), with each term estimated by a softmax function over a non-linear transformation of decoder hidden states s a t and s p t , for argument decoder and keyphrase decoder, respectively.",
"The hidden states are computed as done in Bahdanau et al.",
"(2015) with attention: s t = g(s t−1 , c t , y t ) (1) c t = T j=1 α tj h j (2) α tj = exp(e tj ) T k=1 exp(e tk ) (3) e tj = v T tanh(W h h j + W s s t + b attn ) (4) Notice that two sets of parameters and different state update functions g(·) are learned for separate decoders: {W a h , W a s , b a attn , g a (·)} for the argument decoder; {W p h , W p s , b p attn , g p (·)} for the keyphrase decoder.",
"Encoder.",
"A two-layer bidirectional LSTM (bi-LSTM) is used to obtain the encoder hidden states h i for each time step i.",
"For biLSTM, the hidden state is the concatenation of forward and backward hidden states: h i = [ − → h i ; ← − h i ].",
"Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) , and updated during training.",
"The last hidden state of encoder is used to initialize both decoders.",
"In our model the encoder is shared by argument and keyphrase decoders.",
"Decoders.",
"Our model is equipped with two decoders: keyphrase decoder and argument decoder, each is implemented with a separate two-layer unidirectional LSTM, in a similar spirit with one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015) .",
"The distinction is that our training objective is the sum of two loss functions: L(θ) = − α T p (x,y p )∈D log P (y p |x; θ) − (1 − α) T a (x,y a )∈D log P (y a |x; θ) (5) where T p and T a denote the lengths of reference keyphrase sequence and argument sequence.",
"α is a weighting parameter, and it is set as 0.5 in our experiments.",
"Attention over Both Input and Keyphrases.",
"Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.",
"We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.",
"Additional context vector c t is then computed over keyphrase decoder hidden states s p j , which is used for computing the new argument decoder state: s a t = g (s a t−1 , [c t ; c t ], y a t ) (6) c t = Tp j=1 α tj s p j (7) α tj = exp(e tj ) Tp k=1 exp(e tk ) (8) e tj = v T tanh(W p s p j + W a s a t + b attn ) (9) where s p j is the hidden state of keyphrase decoder at position j, s a t is the hidden state of argument decoder at timestep t, and c t is computed in Eq.",
"2.",
"Decoder Sharing.",
"We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.",
"A special token <arg> is inserted between the two sequences, indicating the start of argument generation.",
"Hybrid Beam Search Decoding Here we describe our decoding strategy on the argument decoder.",
"We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.",
"Hybrid Beam Expansion.",
"In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.",
"However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016) , e.g., one beam often dominates and thus inhibits hypothesis diversity.",
"Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.",
"This leads to a more diverse set of hypotheses.",
"Segment-based Reranking.",
"We also propose to rerank the beams every p steps based on beam's coverage of content words from input.",
"Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.",
"k = 10, n = 3, and p = 10 are used for experiments.",
"The effect of parameter selection is studied in Section 7.",
"Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.",
"Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.",
"A dump of December 21, 2016 was downloaded.",
"For training, evidence sentences are retrieved with queries constructed from target user arguments.",
"For test, queries are constructed from OP.",
"Article Retrieval.",
"We first create an inverted index lookup table for Wikipedia as done in Chen et al.",
"(2017) .",
"For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.",
"Therefore, multiple passes of retrieval will be conducted if more than one query is created.",
"Specifically, we first collect topic signature words of the post.",
"Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.",
"We treat posts from other discussions in our dataset as background.",
"For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.",
"For instance, a query \"the government, my e-mails, national security\" is constructed for the first sentence of OP in the motivating example ( Figure 2 ).",
"Top five retrieved articles with highest TF-IDF similarity scores are kept per query.",
"Sentence Reranking.",
"The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.",
"Up to 100 top ranked paragraphs with positive scores are retained.",
"These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again.",
"We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.",
"Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP .",
"• Keep phrases of length between 2 and 10 that overlap with content words in the argument.",
"• If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.",
"The resultant phrases are then concatenated with a special delimiter <phrase> and used as gold-standard generation for training.",
"6 Experimental Setup Final Dataset Statistics Encoding the full set of evidence by our current decoder takes a huge amount of time.",
"We there propose a sampling strategy to allow the encoder to finish encoding within reasonable time by considering only a subset of the evidence: For each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; then the sampled sentences are concatenated.",
"This procedure is repeated three times per statement, where a statement is an user argument for training data and an OP for test set.",
"In our experiments, we remove duplicates samples and the ones without any retrieved evidence sentence.",
"Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 for validation (640 OPs), and 30,417 retained for test (1,892 OPs).",
"Training Setup For all models, we use a two-layer biLSTM as encoder and a two-layer unidirectional LSTM as decoder, with 200-dimensional hidden states in each layer.",
"We apply dropout (Gal and Ghahramani, 2016) on RNN cells with a keep probability of 0.8.",
"We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.",
"Gradient clipping is also applied with the maximum norm of 2.",
"The input and output vocabulary sizes are both 50k.",
"Curriculum Training.",
"We train the models in three stages where the truncated input and output lengths are gradually increased.",
"Details are listed in Table 2 .",
"Importantly, this strategy allows model training to make rapid progress during early stages.",
"Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.",
"The model converges after about 10 epochs in total with pre-training initialization, which is described below.",
"Adding Pre-training.",
"We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.",
"After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).",
"Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.",
"We describe more detailed results in the supplementary material.",
"Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.",
"We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.",
"All seq2seq models use a regular beam search decoder with the same beam size as ours.",
"Variants of Our Models.",
"We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).",
"For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).",
"System vs. Oracle Retrieval.",
"For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).",
"We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.",
"Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.",
"Human arguments are used as the gold-standard.",
"Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.",
"We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.",
"For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.",
"As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.",
"Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.",
"Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.",
"Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.",
"Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.",
"The reason could be that arguments generated based on system retrieval contain less topic-specific words and more generic argumentative phrases.",
"Since the later is often observed in human written arguments, it may lead to higher precision and thus better BLEU scores.",
"Decoder Strategy Comparison.",
"We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).",
"From the results in Figure 3 , we find that reranking with a smaller step size, e.g., Beams are reranked at every 5, 10, and 20 steps (p).",
"For each step size, we also show the effect of varying k, where top-k words are selected deterministically for beam expansion, with 10 − k randomly sampled over multinomial distribution after removing the k words.",
"Reranking with smaller step size yields better results.",
"p = 5, can generally lead to better METEOR scores.",
"Although varying the number of top words for beam expansion does not yield significant difference, we do observe more diverse beams from the system output if more candidate words are selected stochastically (i.e.",
"with a smaller k).",
"Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.",
"We believe that good arguments should include content that addresses the given topic.",
"Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.",
"To achieve the goal, we first train a topicrelevance estimation model inspired by the latent semantic model in Huang et al.",
"(2013) .",
"A pair of OP and argument, each represented as the average of word embeddings, are separately fed into a twolayer transformation model.",
"A dot-product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.",
"For model learning, we further divide our current training data into training, developing, and test sets.",
"For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.",
"details are included in the supplementary material.",
"We then take this trained model to evaluate the relevance between OP and the corresponding system arguments.",
"Each system argument is treated as positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences most similar to that of the positive sample.",
"Intuitively, if an argument contains more topic relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples.",
"Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4 .",
"The ranker yields significantly better scores over arguments generated from models trained with evidence, compared to arguments generated by SEQ2SEQ model.",
"Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in system outputs.",
"For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases.",
"This further implies that our model generates more topic-relevant content.",
"Human Evaluation We also hire three trained human judges who are fluent English speakers to rate system arguments for the following three aspects on a scale of 1 System Gram Info Rel RETRIEVAL 4.5 ± 0.6 3.7 ± 0.9 3.3 ± 1.1 SEQ2SEQ 3.3 ± 1.1 1.2 ± 0.5 1.4 ± 0.7 OUR MODEL 2.5 ± 0.8 1.6 ± 0.8 1.8 ± 0.8 Table 5 : Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments.",
"Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).",
"to 5 (with 5 as best): Grammaticality-whether an argument is fluent, informativeness-whether the argument contains useful information and is not generic, and relevance-whether the argument contains information of a different stance or offtopic.",
"30 CMV threads are randomly selected, each of which is presented with randomly-shuffled OP statement and four system arguments.",
"Table 5 shows that our model with separate decoder and attention over keyphrases produce significantly more informative and relevant arguments than seq2seq trained without evidence.",
"8 However, we also observe that human judges prefer the retrieved arguments over generation-based models, illustrating the gap between system arguments and human edited text.",
"Sample arguments are displayed in Figure 4 .",
"Further Discussion Keyphrase Generation Analysis.",
"Here we provide further analysis over the generated keyphrases by our separate decoder model.",
"First, about 10% of the keyphrases output by our model also appear in the gold-standard (i.e., used by human arguments).",
"Furthermore, 36% of generated keyphrases are reused by our system arguments.",
"With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high level talking points in the argument.",
"For instance, in the first sample argument by our model in Figure 4 , keyphrases \"the motive\" and \"russian\" are generated.",
"Although not used, they suggest the topics that the argument should stay on.",
"Sample Arguments and Future Directions.",
"As can be seen from the sample outputs in Figure 4 , our model generally captures more relevant concepts, e.g., \"military army\" and \"wars Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.",
"That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.",
"Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.",
"It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.",
"In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.",
"Our Model: There are two reasons you are so far beyond.",
"There is no reason to see the military army.",
"You can infer what they assume it, so they tend to protect up.",
"There.",
"If we assume it were bad, they can not be controlled.",
"So we're talking going to ignore the wars of the world.",
"The fact of the matter is not bad.",
"I would also assume it.",
"However, the government can not be reminded of the world.",
"Seq2seq + evd: 1.",
"The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.",
"In Russia, there is no way to continue to control the military.",
"Seq2seq + KP: I don't think it's a bit.",
"I don't think you're forgetting.",
"I don't want to see the Syrian American dream in the US.",
"I don't see the Syrian American dream of American citizens.",
"I don't see the Syrian American dream of American citizens.",
"I don't see the Syrian American military in the world.",
"I don't think it wouldn't be true.",
"Input: Voting for either Hillary or Trump is morally reprehensible.",
"Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.",
"Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?",
"Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.",
"Our Model: He's going to happen.",
"You're overestimating the more premise.",
"He's already overestimating to this hypothetical party.",
"He's already killed over the supreme court positions.",
"Looking for Bernie.",
"This is the same reasons: Clinton is not the supreme court presidency.",
"You've already sent Trump's right to vote.",
"The fact is that he's going to happen.",
"He's already going to notice.",
"However, it's a long way.",
"Seq2seq + evd: 1.",
"If they don't vote the vote, they would not be able to vote for any candidate.",
"They don't have the same effect on their political power.",
"They are not voting for them.",
"Seq2seq + KP: I agree with your view.",
"I don't agree with you.",
"I don't think it's easy to appeal to the Clintons.",
"If you don't want to do this?",
"Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.",
"Only the main thesis is shown for the input OP.",
"System generations are manually detokenized and capitalized.",
"of the world\", as discussed in the first example.",
"Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.",
"As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.",
"For future work, generation models with a better control on linguistic style need to be designed.",
"As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.",
"Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .",
"While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.",
"Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .",
"For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.",
"It however only outputs a text plan, mainly relying on heuristic rules.",
"Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.",
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.",
"Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.",
"Wachsmuth et al.",
"(2017) build a search engine from arguments collected from various online debate portals.",
"After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .",
"Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.",
"To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.",
"Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.",
"We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.",
"Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.",
"Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5.1",
"5.2",
"6.1",
"6.2",
"6.3",
"7.1",
"7.2",
"7.3",
"8",
"9",
"10"
],
"paper_header_content": [
"Introduction",
"Framework",
"Data Collection and Processing",
"Model",
"Model Formulation",
"Hybrid Beam Search Decoding",
"Retrieval Methodology",
"Gold-Standard Keyphrase Construction",
"Final Dataset Statistics",
"Training Setup",
"Baseline and Comparisons",
"Automatic Evaluation",
"Topic-Relevance Evaluation",
"Human Evaluation",
"Further Discussion",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-131#paper-1354#slide-17 | Sample Argument | Original Post Generated Counterargument
Putin is trying to re-form a Soviet Union with his past actions in Georgia and current actions in Ukraine.
I firmly believe that Putin and the Russian Federation (RF) are trying to re-form a Soviet Union type regime
The Russian Army invaded certain regions of Georgia
There are two reasons you are so far beyond. There is no reason to see the military army. You can infer what they assume it, so they tend to protect up.
There. If we assume it were bad, they can not be controlled. So were talking going to ignore the wars of the world.
The fact of the matter is not bad. I would also assume it. However, the government can not be reminded of the world. |
Putin is trying to re-form a Soviet
Union with his past actions in Georgia and current actions in Ukraine.
I firmly believe that Putin and the
Russian Federation (RF) are trying to re-form a Soviet Union type regime
The Russian Army invaded certain regions of Georgia
There are two reasons you are so far beyond. There is no reason to see the military army. You can infer what they assume it, so they tend to protect up.
There. If we assume it were bad, they can not be controlled. So were talking going to ignore the wars of the world.
The fact of the matter is not bad. I would also assume it. However, the government can not be reminded of the world. | [] |
GEM-SciDuet-train-131#paper-1354#slide-18 | 1354 | Neural Argument Generation Augmented with Externally Retrieved Evidence | High quality arguments are essential elements for human reasoning and decision-making processes. However, effective argument construction is a challenging task for both human and machines. In this work, we study a novel task on automatically generating arguments of a different stance for a given statement. We propose an encoder-decoder style neural network-based argument generation model enriched with externally retrieved evidence from Wikipedia. Our model first generates a set of talking point phrases as intermediate representation, followed by a separate decoder producing the final argument based on both input and the keyphrases. Experiments on a large-scale dataset collected from Reddit show that our model constructs arguments with more topicrelevant content than a popular sequence-tosequence generation model according to both automatic evaluation and human assessments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254,
255,
256,
257,
258,
259,
260,
261,
262,
263,
264,
265,
266,
267,
268,
269,
270,
271,
272,
273,
274,
275,
276,
277,
278,
279
],
"paper_content_text": [
"Introduction Generating high quality arguments plays a crucial role in decision-making and reasoning processes (Bonet and Geffner, 1996; Byrnes, 2013) .",
"A multitude of arguments and counter-arguments are constructed on a daily basis, both online and offline, to persuade and inform us on a wide range of issues.",
"For instance, debates are often conducted in legislative bodies to secure enough votes for bills to pass.",
"In another example, online deliberation has become a popular way of soliciting public opinions on new policies' pros and cons (Albrecht, 2006; Park et al., 2012) .",
"Nonetheless, constructing persuasive arguments is a daunting task, for both human and computers.",
"We believe that developing effective argument generation models will enable a broad range of compelling applications, including debate coaching, improving students' essay writing skills, and pro- viding context of controversial issues from different perspectives.",
"As a consequence, there exists a pressing need for automating the argument construction process.",
"To date, progress made in argument generation has been limited to retrieval-based methodsarguments are ranked based on relevance to a given topic, then the top ones are selected for inclusion in the output (Rinott et al., 2015; Wachsmuth et al., 2017; Hua and Wang, 2017) .",
"Although sentence ordering algorithms are developed for information structuring (Sato et al., 2015; Reisert et al., 2015) , existing methods lack the ability of synthesizing information from different resources, leading to redundancy and incoherence in the output.",
"In general, the task of argument generation presents numerous challenges, ranging from aggregating supporting evidence to generating text with coherent logical structure.",
"One particular hurdle comes from the underlying natural language generation (NLG) stack, whose success has been limited to a small set of domains.",
"Especially, most previous NLG systems rely on tem-plates that are either constructed by rules (Hovy, 1993; Belz, 2008; Bouayad-Agha et al., 2011) , or acquired from a domain-specific corpus (Angeli et al., 2010) to enhance grammaticality and coherence.",
"This makes them unwieldy to be adapted for new domains.",
"In this work, we study the following novel problem: given a statement on a controversial issue, generate an argument of an alternative stance.",
"To address the above challenges, we present a neural network-based argument generation framework augmented with externally retrieved evidence.",
"Our model is inspired by the observation that when humans construct arguments, they often collect references from external sources, e.g., Wikipedia or research papers, and then write their own arguments by synthesizing talking points from the references.",
"Figure 1 displays sample arguments by users from Reddit subcommunity /r/ChangeMyView 1 who argue against the motion that \"government should be allowed to view private emails\".",
"Both replies leverage information drawn from Wikipedia, such as \"political corruption\" and \"Fourth Amendment on protections of personal privacy\".",
"Concretely, our neural argument generation model adopts the popular encoder-decoderbased sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) , which has achieved significant success in various text generation tasks (Bahdanau et al., 2015; Wen et al., 2015; Wang and Ling, 2016; Mei et al., 2016; Wiseman et al., 2017) .",
"Our encoder takes as input a statement on a disputed issue, and a set of relevant evidence automatically retrieved from English Wikipedia 2 .",
"Our decoder consists of two separate parts, one of which first generates keyphrases as intermediate representation of \"talking points\", and the other then generates an argument based on both input and keyphrases.",
"Automatic evaluation based on BLEU (Papineni et al., 2002) shows that our framework generates better arguments than directly using retrieved sentences or popular seq2seq-based generation models (Bahdanau et al., 2015) that are also trained with retrieved evidence.",
"We further design a novel evaluation procedure to measure whether the arguments are on-topic by predicting their relevance to the given statement based on a separately trained relevance estimation model.",
"Results suggest that our model generated arguments are more likely to be predicted as on-topic, compared to other seq2seq-based generations models.",
"The rest of this paper is organized as follows.",
"Section 2 highlights the roadmap of our system.",
"The dataset used for our study is introduced in Section 3.",
"The model formulation and retrieval methods are detailed in Sections 4 and 5.",
"We then describe the experimental setup and results in Sections 6 and 7, followed by further analysis and future directions in Section 8.",
"Related work is discussed in Section 9.",
"Finally, we conclude in Section 10.",
"Framework Our argument generation pipeline, consisting of evidence retrieval and argument construction, is depicted in Figure 2 .",
"Given a statement, a set of queries are constructed based on its topic signature words (e.g., \"government\" and \"national security\") to retrieve a list of relevant articles from Wikipedia.",
"A reranking component further extracts sentences that may contain supporting evidence, which are used as additional input information for the neural argument generation model.",
"The generation model then encodes the statement and the evidence with a shared encoder in sequence.",
"Two decoders are designed: the keyphrase decoder first generates an intermediate representation of talking points in the form of keyphrases (e.g., \"right to privacy\", \"political corruption\"), followed by a separate argument decoder which produces the final argument.",
"Data Collection and Processing We draw data from Reddit subcommunity /r/ChangeMyView (henceforth CMV), which focuses on facilitating open discussions on a wide range of disputed issues.",
"Specifically, CMV is structured as discussion threads, where the original post (OP) starts with a viewpoint on a controversial topic, followed with detailed reasons, then other users reply with counter-arguments.",
"Importantly, when a user believes his view has been changed by an argument, a delta is often awarded to the reply.",
"In total, 26,761 threads from CMV are downloaded, dating from January 2013 to June 2017 3 .",
"Figure 2: Overview of our system pipeline (best viewed in color).",
"Given a statement, relevant articles are retrieved from Wikipedia with topic signatures from statement as queries (marked in red and boldface).",
"A reranking module then outputs top sentences as evidence.",
"The statement and the evidence (encoder states in gray panel) are concatenated and encoded as input for our argument generation model.",
"During decoding, the keyphrase decoder first generates talking points as phrases, followed by the argument decoder which constructs the argument by attending both input and keyphrases.",
"Only root replies (i.e., replies directly addressing OP) that meet all of the following requirements are included: (1) longer than 5 words, (2) without offensive language 4 , (3) awarded with delta or with more upvotes than downvotes, and (4) not generated by system moderators.",
"After filtering, the resultant dataset contains 26,525 OPs along with 305,475 relatively high quality root replies.",
"We treat each OP as the input statement, and the corresponding root replies as target arguments, on which our model is trained and evaluated.",
"A Focused Domain Dataset.",
"The current dataset contains diverse domains with unbalanced numbers of arguments.",
"We therefore choose samples from the politics domain due to its large volume of discussions and good coverage of popular arguments in the domain.",
"However, topic labels are not available for the discussions.",
"We thus construct a domain classifier for politics vs. non-politics posts based on a logistic regression model with unigram features, trained from our heuristically labeled Wikipedia abstracts 5 .",
"Concretely, we manually collect two lists of keywords that are indicative of politics and non-politics.",
"Each abstract is labeled as politics or non-politics if its title only matches keywords from one category.",
"6 In total, 264,670 politics abstracts and 827,437 of non-politics are labeled.",
"Starting from this dataset, our domain classifier is trained in a bootstrapping manner by gradually adding OPs predicted as politics or non-politics.",
"7 Finally, 12,549 OPs are labeled as politics, each of which is paired with 9.4 high-quality target arguments on average.",
"The average length for OPs is 16.1 sentences of 356.4 words, and 7.7 sentences of 161.1 words for arguments.",
"Model In this section, we present our argument generation model, which jointly learns to generate talking points in the form of keyphrases and produce arguments based on the input and keyphrases.",
"Extended from the successful seq2seq attentional model (Bahdanau et al., 2015) , our proposed model is novel in the following ways.",
"First, two separate decoders are designed, one for generating keyphrases, the other for argument construction.",
"By sharing the encoder with keyphrase generation, our argument decoder is better aware of salient talking points in the input.",
"Second, a novel attention mechanism is designed for argument decoding by attending both input and the previously generated keyphrases.",
"Finally, a reranking-based beam search decoder is introduced to promote topic-relevant generations.",
"Model Formulation Our model takes as input a sequence of tokens x = {x O ; x E }, where x O is the statement se- quence and x E contains relevant evidence that is extracted from Wikipedia based on a separate retrieval module.",
"A special token <evd> is inserted between x O and x E .",
"Our model then first generates a set of keyphrases as a sequence y p = {y p l }, followed by an argument y a = {y a t }, by maximizing log P (y|x), where y = {y p ; y a }.",
"The objective is further decomposed into t log P (y t |y 1:t−1 , x), with each term estimated by a softmax function over a non-linear transformation of decoder hidden states s a t and s p t , for argument decoder and keyphrase decoder, respectively.",
"The hidden states are computed as done in Bahdanau et al.",
"(2015) with attention: s t = g(s t−1 , c t , y t ) (1) c t = T j=1 α tj h j (2) α tj = exp(e tj ) T k=1 exp(e tk ) (3) e tj = v T tanh(W h h j + W s s t + b attn ) (4) Notice that two sets of parameters and different state update functions g(·) are learned for separate decoders: {W a h , W a s , b a attn , g a (·)} for the argument decoder; {W p h , W p s , b p attn , g p (·)} for the keyphrase decoder.",
"Encoder.",
"A two-layer bidirectional LSTM (bi-LSTM) is used to obtain the encoder hidden states h i for each time step i.",
"For biLSTM, the hidden state is the concatenation of forward and backward hidden states: h i = [ − → h i ; ← − h i ].",
"Word representations are initialized with 200-dimensional pre-trained GloVe embeddings (Pennington et al., 2014) , and updated during training.",
"The last hidden state of encoder is used to initialize both decoders.",
"In our model the encoder is shared by argument and keyphrase decoders.",
"Decoders.",
"Our model is equipped with two decoders: keyphrase decoder and argument decoder, each is implemented with a separate two-layer unidirectional LSTM, in a similar spirit with one-to-many multi-task sequence-to-sequence learning (Luong et al., 2015) .",
"The distinction is that our training objective is the sum of two loss functions: L(θ) = − α T p (x,y p )∈D log P (y p |x; θ) − (1 − α) T a (x,y a )∈D log P (y a |x; θ) (5) where T p and T a denote the lengths of reference keyphrase sequence and argument sequence.",
"α is a weighting parameter, and it is set as 0.5 in our experiments.",
"Attention over Both Input and Keyphrases.",
"Intuitively, the argument decoder should consider the generated keyphrases as talking points during the generation process.",
"We therefore propose an attention mechanism that can attend both encoder hidden states and the keyphrase decoder hidden states.",
"Additional context vector c t is then computed over keyphrase decoder hidden states s p j , which is used for computing the new argument decoder state: s a t = g (s a t−1 , [c t ; c t ], y a t ) (6) c t = Tp j=1 α tj s p j (7) α tj = exp(e tj ) Tp k=1 exp(e tk ) (8) e tj = v T tanh(W p s p j + W a s a t + b attn ) (9) where s p j is the hidden state of keyphrase decoder at position j, s a t is the hidden state of argument decoder at timestep t, and c t is computed in Eq.",
"2.",
"Decoder Sharing.",
"We also experiment with a shared decoder between keyphrase generation and argument generation: the last hidden state of the keyphrase decoder is used as the initial hidden state for the argument decoder.",
"A special token <arg> is inserted between the two sequences, indicating the start of argument generation.",
"Hybrid Beam Search Decoding Here we describe our decoding strategy on the argument decoder.",
"We design a hybrid beam expansion method combined with segment-based reranking to promote diversity of beams and informativeness of the generated arguments.",
"Hybrid Beam Expansion.",
"In the standard beam search, the top k words of highest probability are selected deterministically based on the softmax output to expand each hypothesis.",
"However, this may lead to suboptimal output for text generation (Wiseman and Rush, 2016) , e.g., one beam often dominates and thus inhibits hypothesis diversity.",
"Here we only pick the top n words (n < k), and randomly draw another k − n words based on the multinomial distribution after removing the n expanded words from the candidates.",
"This leads to a more diverse set of hypotheses.",
"Segment-based Reranking.",
"We also propose to rerank the beams every p steps based on beam's coverage of content words from input.",
"Based on our observation that likelihood-based reranking often leads to overly generic arguments (e.g., \"I don't agree with you\"), this operation has the potential of encouraging more informative generation.",
"k = 10, n = 3, and p = 10 are used for experiments.",
"The effect of parameter selection is studied in Section 7.",
"Relevant Evidence Retrieval Retrieval Methodology We take a two-step approach for retrieving evidence sentences: given a statement, (1) constructing one query per sentence and retrieving relevant articles from Wikipedia, and (2) reranking paragraphs and then sentences to create the final set of evidence sentences.",
"Wikipedia is used as our evidence source mainly due to its objective perspective and broad coverage of topics.",
"A dump of December 21, 2016 was downloaded.",
"For training, evidence sentences are retrieved with queries constructed from target user arguments.",
"For test, queries are constructed from OP.",
"Article Retrieval.",
"We first create an inverted index lookup table for Wikipedia as done in Chen et al.",
"(2017) .",
"For a given statement, we construct one query per sentence to broaden the diversity of retrieved articles.",
"Therefore, multiple passes of retrieval will be conducted if more than one query is created.",
"Specifically, we first collect topic signature words of the post.",
"Topic signatures (Lin and Hovy, 2000) are terms strongly correlated with a given post, measured by log-likelihood ratio against a background corpus.",
"We treat posts from other discussions in our dataset as background.",
"For each sentence, one query is constructed based on the noun phrases and verbs containing at least one topic signature word.",
"For instance, a query \"the government, my e-mails, national security\" is constructed for the first sentence of OP in the motivating example ( Figure 2 ).",
"Top five retrieved articles with highest TF-IDF similarity scores are kept per query.",
"Sentence Reranking.",
"The retrieved articles are first segmented into paragraphs, which are reranked by TF-IDF similarity to the given statement.",
"Up to 100 top ranked paragraphs with positive scores are retained.",
"These paragraphs are further segmented into sentences, and reranked according to TF-IDF similarity again.",
"We only keep up to 10 top sentences with positive scores for inclusion in the evidence set.",
"Gold-Standard Keyphrase Construction To create training data for the keyphrase decoder, we use the following rules to identify keyphrases from evidence sentences that are reused by human writers for argument construction: • Extract noun phrases and verb phrases from evidence sentences using Stanford CoreNLP .",
"• Keep phrases of length between 2 and 10 that overlap with content words in the argument.",
"• If there is span overlap between phrases, the longer one is kept if it has more content word coverage of the argument; otherwise the shorter one is retained.",
"The resultant phrases are then concatenated with a special delimiter <phrase> and used as gold-standard generation for training.",
"6 Experimental Setup Final Dataset Statistics Encoding the full set of evidence by our current decoder takes a huge amount of time.",
"We there propose a sampling strategy to allow the encoder to finish encoding within reasonable time by considering only a subset of the evidence: For each sentence in the statement, up to three evidence sentences are randomly sampled from the retrieved set; then the sampled sentences are concatenated.",
"This procedure is repeated three times per statement, where a statement is an user argument for training data and an OP for test set.",
"In our experiments, we remove duplicates samples and the ones without any retrieved evidence sentence.",
"Finally, we break down the augmented data into a training set of 224,553 examples (9,737 unique OPs), 13,911 for validation (640 OPs), and 30,417 retained for test (1,892 OPs).",
"Training Setup For all models, we use a two-layer biLSTM as encoder and a two-layer unidirectional LSTM as decoder, with 200-dimensional hidden states in each layer.",
"We apply dropout (Gal and Ghahramani, 2016) on RNN cells with a keep probability of 0.8.",
"We use Adam (Kingma and Ba, 2015) with an initial learning rate of 0.001 to optimize the cross-entropy loss.",
"Gradient clipping is also applied with the maximum norm of 2.",
"The input and output vocabulary sizes are both 50k.",
"Curriculum Training.",
"We train the models in three stages where the truncated input and output lengths are gradually increased.",
"Details are listed in Table 2 .",
"Importantly, this strategy allows model training to make rapid progress during early stages.",
"Training each of our full models takes about 4 days on a Quadro P5000 GPU card with a batch size of 32.",
"The model converges after about 10 epochs in total with pre-training initialization, which is described below.",
"Adding Pre-training.",
"We pre-train a two-layer seq2seq model with OP as input and target argument as output from our training set.",
"After 20 epochs (before converging), parameters for the first layer are used to initialize the first layer of all comparison models and our models (except for the keyphrase decoder).",
"Experimental results show that pre-training boosts all methods by roughly 2 METEOR (Denkowski and Lavie, 2014) points.",
"We describe more detailed results in the supplementary material.",
"Baseline and Comparisons We first consider a RETRIEVAL-based baseline, which concatenates retrieved evidence sentences to form the argument.",
"We further compare with three seq2seq-based generation models with different training data: (1) SEQ2SEQ: training with OP as input and the argument as output; (2) SEQ2SEQ + encode evd: augmenting input with evidence sentences as in our model; (3) SEQ2SEQ + encode KP: augmenting input with gold-standard keyphrases, which assumes some of the talking points are known.",
"All seq2seq models use a regular beam search decoder with the same beam size as ours.",
"Variants of Our Models.",
"We experiment with variants of our models based on the proposed separate decoder model (DEC-SEPARATE) or using a shared decoder (DEC-SHARED).",
"For each, we further test whether adding keyphrase attention for argument decoding is helpful (+ attend KP).",
"System vs. Oracle Retrieval.",
"For test time, evidence sentences are retrieved with queries constructed from OP (System Retrieval).",
"We also experiment with an Oracle Retrieval setup, where the evidence is retrieved based on user arguments, to indicate how much gain can be expected with better retrieval results.",
"Results Automatic Evaluation For automatic evaluation, we use BLEU (Papineni et al., 2002) , an n-gram precision-based metric (up to bigrams are considered), and ME-TEOR (Denkowski and Lavie, 2014) , measuring unigram recall and precision by considering paraphrases, synonyms, and stemming.",
"Human arguments are used as the gold-standard.",
"Because each OP may be paired with more than one highquality arguments, we compute BLEU and ME-TEOR scores for the system argument compared against all arguments, and report the best.",
"We do not use multiple reference evaluation because the arguments are often constructed from different angles and cover distinct aspects of the issue.",
"For models that generate more than one arguments based on different sets of sampled evidence, the one with the highest score is considered.",
"As can be seen from Table 3 , our models produce better BLEU scores than almost all the comparisons.",
"Especially, our models with separate decoder yield significantly higher BLEU and ME-TEOR scores than all seq2seq-based models (approximation randomization testing, p < 0.0001) do.",
"Better METEOR scores are achieved by the RETRIEVAL baseline, mainly due to its significantly longer arguments.",
"Moreover, utilizing attention over both input and the generated keyphrases further boosts our models' performance.",
"Interestingly, utilizing system retrieved evidence yields better BLEU scores than using oracle retrieval for testing.",
"The reason could be that arguments generated based on system retrieval contain less topic-specific words and more generic argumentative phrases.",
"Since the later is often observed in human written arguments, it may lead to higher precision and thus better BLEU scores.",
"Decoder Strategy Comparison.",
"We also study the effect of our reranking-based decoder by varying the reranking step size (p) and the number of top words expanded to beam hypotheses deterministically (k).",
"From the results in Figure 3 , we find that reranking with a smaller step size, e.g., Beams are reranked at every 5, 10, and 20 steps (p).",
"For each step size, we also show the effect of varying k, where top-k words are selected deterministically for beam expansion, with 10 − k randomly sampled over multinomial distribution after removing the k words.",
"Reranking with smaller step size yields better results.",
"p = 5, can generally lead to better METEOR scores.",
"Although varying the number of top words for beam expansion does not yield significant difference, we do observe more diverse beams from the system output if more candidate words are selected stochastically (i.e.",
"with a smaller k).",
"Topic-Relevance Evaluation During our pilot study, we observe that generic arguments, such as \"I don't agree with you\" or \"this is not true\", are prevalent among generations by seq2seq models.",
"We believe that good arguments should include content that addresses the given topic.",
"Therefore, we design a novel evaluation method to measure whether the generated arguments contain topic-relevant information.",
"To achieve the goal, we first train a topicrelevance estimation model inspired by the latent semantic model in Huang et al.",
"(2013) .",
"A pair of OP and argument, each represented as the average of word embeddings, are separately fed into a twolayer transformation model.",
"A dot-product is computed over the two projected low-dimensional vectors, and then a sigmoid function outputs the relevance score.",
"For model learning, we further divide our current training data into training, developing, and test sets.",
"For each OP and argument pair, we first randomly sample 100 arguments from other threads, and then pick the top 5 dissimilar ones, measured by Jaccard distance, as negative training samples.",
"details are included in the supplementary material.",
"We then take this trained model to evaluate the relevance between OP and the corresponding system arguments.",
"Each system argument is treated as positive sample; we then select five negative samples from arguments generated for other OPs whose evidence sentences most similar to that of the positive sample.",
"Intuitively, if an argument contains more topic relevant information, then the relevance estimation model will output a higher score for it; otherwise, the argument will receive a lower similarity score, and thus cannot be easily distinguished from negative samples.",
"Ranking metrics of MRR and Precision at 1 (P@1) are utilized, with results reported in Table 4 .",
"The ranker yields significantly better scores over arguments generated from models trained with evidence, compared to arguments generated by SEQ2SEQ model.",
"Moreover, we manually pick 29 commonly used generic responses (e.g., \"I don't think so\") and count their frequency in system outputs.",
"For the seq2seq model, more than 75% of its outputs contain at least one generic argument, compared to 16.2% by our separate decoder model with attention over keyphrases.",
"This further implies that our model generates more topic-relevant content.",
"Human Evaluation We also hire three trained human judges who are fluent English speakers to rate system arguments for the following three aspects on a scale of 1 System Gram Info Rel RETRIEVAL 4.5 ± 0.6 3.7 ± 0.9 3.3 ± 1.1 SEQ2SEQ 3.3 ± 1.1 1.2 ± 0.5 1.4 ± 0.7 OUR MODEL 2.5 ± 0.8 1.6 ± 0.8 1.8 ± 0.8 Table 5 : Human evaluation results on grammaticality (Gram), informativeness (Info), and relevance (Rel) of arguments.",
"Our model with separate decoder and attention over keyphrases receives significantly better ratings in informativeness and relevance than seq2seq (one-way ANOVA, p < 0.005).",
"to 5 (with 5 as best): Grammaticality-whether an argument is fluent, informativeness-whether the argument contains useful information and is not generic, and relevance-whether the argument contains information of a different stance or offtopic.",
"30 CMV threads are randomly selected, each of which is presented with randomly-shuffled OP statement and four system arguments.",
"Table 5 shows that our model with separate decoder and attention over keyphrases produce significantly more informative and relevant arguments than seq2seq trained without evidence.",
"8 However, we also observe that human judges prefer the retrieved arguments over generation-based models, illustrating the gap between system arguments and human edited text.",
"Sample arguments are displayed in Figure 4 .",
"Further Discussion Keyphrase Generation Analysis.",
"Here we provide further analysis over the generated keyphrases by our separate decoder model.",
"First, about 10% of the keyphrases output by our model also appear in the gold-standard (i.e., used by human arguments).",
"Furthermore, 36% of generated keyphrases are reused by our system arguments.",
"With human inspection, we find that although some keyphrases are not directly reused by the argument decoder, they represent high level talking points in the argument.",
"For instance, in the first sample argument by our model in Figure 4 , keyphrases \"the motive\" and \"russian\" are generated.",
"Although not used, they suggest the topics that the argument should stay on.",
"Sample Arguments and Future Directions.",
"As can be seen from the sample outputs in Figure 4 , our model generally captures more relevant concepts, e.g., \"military army\" and \"wars Input: Putin is trying to re-form a \"Soviet Union\" with his past actions in Georgia and current actions in Ukraine Human: So far Putin has only really targeted areas where the local population is majority russian (or historically majority russian) and sympathetic to russia.",
"That's not to say that he wouldn't go further, but I don't think there is yet any compelling reason to believe he would.",
"Retrieval: In 1968, the Soviet Union and Warsaw pact allies invaded Czechoslovakia to halt the Prague spring reforms.",
"It was this significant lack of flexibility that was exploited by the US to undermine the strength of the Soviet Union and thus foster its reform.",
"In the summer of 1991, the new union treaty, which would have turned the Soviet Union into a much looser union, was agreed upon by eight republics.",
"Our Model: There are two reasons you are so far beyond.",
"There is no reason to see the military army.",
"You can infer what they assume it, so they tend to protect up.",
"There.",
"If we assume it were bad, they can not be controlled.",
"So we're talking going to ignore the wars of the world.",
"The fact of the matter is not bad.",
"I would also assume it.",
"However, the government can not be reminded of the world.",
"Seq2seq + evd: 1.",
"The reason is that the government isn't the power of the war of Russia, at the time of the war of Russia.",
"In Russia, there is no way to continue to control the military.",
"Seq2seq + KP: I don't think it's a bit.",
"I don't think you're forgetting.",
"I don't want to see the Syrian American dream in the US.",
"I don't see the Syrian American dream of American citizens.",
"I don't see the Syrian American dream of American citizens.",
"I don't see the Syrian American military in the world.",
"I don't think it wouldn't be true.",
"Input: Voting for either Hillary or Trump is morally reprehensible.",
"Human: I don't think a vote for either of them means \"I'm totally fine with...\" as you say.",
"Why is it not acceptable to pick the option whose platform most closely aligns to your values/beliefs/desires, while acknowledging neither candidate is ideal?",
"Retrieval: Vote trading under a democratic, majority-rule institution is sometimes considered morally reprehensible behavior.",
"Our Model: He's going to happen.",
"You're overestimating the more premise.",
"He's already overestimating to this hypothetical party.",
"He's already killed over the supreme court positions.",
"Looking for Bernie.",
"This is the same reasons: Clinton is not the supreme court presidency.",
"You've already sent Trump's right to vote.",
"The fact is that he's going to happen.",
"He's already going to notice.",
"However, it's a long way.",
"Seq2seq + evd: 1.",
"If they don't vote the vote, they would not be able to vote for any candidate.",
"They don't have the same effect on their political power.",
"They are not voting for them.",
"Seq2seq + KP: I agree with your view.",
"I don't agree with you.",
"I don't think it's easy to appeal to the Clintons.",
"If you don't want to do this?",
"Figure 4 : Sample arguments generated by human, our system, and seq2seq trained with evidence.",
"Only the main thesis is shown for the input OP.",
"System generations are manually detokenized and capitalized.",
"of the world\", as discussed in the first example.",
"Meanwhile, our model also acquires argumentative style language, though there is still a noticeable gap between system arguments and human constructed arguments.",
"As discovered by our prior work , both topical content and language style are essential elements for high quality arguments.",
"For future work, generation models with a better control on linguistic style need to be designed.",
"As for improving coherence, we believe that discourse-aware generation models (Ji et al., 2016) should also be explored in the future work to enhance text planning.",
"Related Work There is a growing interest in argumentation mining from the natural language processing research community (Park and Cardie, 2014; Ghosh et al., 2014; Palau and Moens, 2009; Niculae et al., 2017; Eger et al., 2017) .",
"While argument understanding has received increasingly more attention, the area of automatic argument generation is much less studied.",
"Early work on argument construction investigates the design of argumentation strategies (Reed et al., 1996; Carenini and Moore, 2000; Zukerman et al., 2000) .",
"For instance, Reed (1999) describes the first full natural language argument generation system, called Rhetorica.",
"It however only outputs a text plan, mainly relying on heuristic rules.",
"Due to the difficulty of text generation, none of the previous work represents a fully automated argument generation system.",
"This work aims to close the gap by proposing an end-to-end trained argument construction framework.",
"Additionally, argument retrieval and extraction are investigated (Rinott et al., 2015; Hua and Wang, 2017) to deliver relevant arguments for user-specified queries.",
"Wachsmuth et al.",
"(2017) build a search engine from arguments collected from various online debate portals.",
"After the retrieval step, sentence ordering algorithms are often applied to improve coherence (Sato et al., 2015; Reisert et al., 2015) .",
"Nevertheless, simply merging arguments from different resources inevitably introduces redundancy.",
"To the best of our knowledge, this is the first automatic argument generation system that can synthesize retrieved content from different articles into fluent arguments.",
"Conclusion We studied the novel problem of generating arguments of a different stance for a given statement.",
"We presented a neural argument generation framework enhanced with evidence retrieved from Wikipedia.",
"Separate decoders were designed to first produce a set of keyphrases as talking points, and then generate the final argument.",
"Both automatic evaluation against human arguments and human assessment showed that our model produced more informative arguments than popular sequence-to-sequence-based generation models."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"5.1",
"5.2",
"6.1",
"6.2",
"6.3",
"7.1",
"7.2",
"7.3",
"8",
"9",
"10"
],
"paper_header_content": [
"Introduction",
"Framework",
"Data Collection and Processing",
"Model",
"Model Formulation",
"Hybrid Beam Search Decoding",
"Retrieval Methodology",
"Gold-Standard Keyphrase Construction",
"Final Dataset Statistics",
"Training Setup",
"Baseline and Comparisons",
"Automatic Evaluation",
"Topic-Relevance Evaluation",
"Human Evaluation",
"Further Discussion",
"Related Work",
"Conclusion"
]
} | GEM-SciDuet-train-131#paper-1354#slide-18 | Future Directions | Better evidence retrieval system
Prone to incoherence, inaccurate information, generic generation etc | Better evidence retrieval system
Prone to incoherence, inaccurate information, generic generation etc | [] |
GEM-SciDuet-train-131#paper-1354#slide-19 | 1354 | Neural Argument Generation Augmented with Externally Retrieved Evidence | (paper_abstract, paper_content, and paper_headers are identical to the row above) | GEM-SciDuet-train-131#paper-1354#slide-19 | Conclusion | We study a novel neural argument generation task.
We collect and release a new dataset from r/ChangeMyView and accompanying Wikipedia evidence for argument generation research.
We propose an end-to-end argument generation system, enhanced with Wikipedia retrieved evidence sentences. | We study a novel neural argument generation task.
We collect and release a new dataset from r/ChangeMyView and accompanying Wikipedia evidence for argument generation research.
We propose an end-to-end argument generation system, enhanced with Wikipedia retrieved evidence sentences. | [] |
GEM-SciDuet-train-132#paper-1355#slide-0 | 1355 | Towards Understanding the Geometry of Knowledge Graph Embeddings | Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored - we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embedding methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188
],
"paper_content_text": [
"Introduction Knowledge Graphs (KGs) are multi-relational graphs where nodes represent entities and typededges represent relationships among entities.",
"Recent research in this area has resulted in the development of several large KGs, such as NELL (Mitchell et al., 2015) , YAGO (Suchanek et al., 2007) , and Freebase (Bollacker et al., 2008) , among others.",
"These KGs contain thousands of predicates (e.g., person, city, mayorOf(person, city), etc.",
"), and millions of triples involving such predicates, e.g., (Bill de Blasio, mayorOf, New York City) .",
"The problem of learning embeddings for Knowledge Graphs has received significant attention in recent years, with several methods being proposed (Bordes et al., 2013; Lin et al., 2015; Nguyen et al., 2016; Nickel et al., 2016; Trouillon et al., 2016) .",
"These methods represent entities and relations in a KG as vectors in high dimensional space.",
"These vectors can then be used for various tasks, such as, link prediction, entity classification etc.",
"Starting with TransE (Bordes et al., 2013) , there have been many KG embedding methods such as TransH (Wang et al., 2014) , TransR (Lin et al., 2015) and STransE (Nguyen et al., 2016) which represent relations as translation vectors from head entities to tail entities.",
"These are additive models, as the vectors interact via addition and subtraction.",
"Other KG embedding models, such as, DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) , and ComplEx (Trouillon et al., 2016) are multiplicative where entityrelation-entity triple likelihood is quantified by a multiplicative score function.",
"All these methods employ a score function for distinguishing correct triples from incorrect ones.",
"In spite of the existence of many KG embedding methods, our understanding of the geometry and structure of such embeddings is very shallow.",
"A recent work (Mimno and Thompson, 2017) analyzed the geometry of word embeddings.",
"However, the problem of analyzing geometry of KG embeddings is still unexplored -we fill this important gap.",
"In this paper, we analyze the geometry of such vectors in terms of their lengths and conicity, which, as defined in Section 4, describes their positions and orientations in the vector space.",
"We later study the effects of model type and training hyperparameters on the geometry of KG embeddings and correlate geometry with performance.",
"We make the following contributions: • We initiate a study to analyze the geometry of various Knowledge Graph (KG) embeddings.",
"To the best of our knowledge, this is the first study of its kind.",
"We also formalize various metrics which can be used to study geometry of a set of vectors.",
"• Through extensive analysis, we discover several interesting insights about the geometry of KG embeddings.",
"For example, we find systematic differences between the geometries of embeddings learned by additive and multiplicative KG embedding methods.",
"• We also study the relationship between geometric attributes and predictive performance of the embeddings, resulting in several new insights.",
"For example, in case of multiplicative models, we observe that for entity vectors generated with a fixed number of negative samples, lower conicity (as defined in Section 4) or higher average vector length lead to higher performance.",
"Source code of all the analysis tools developed as part of this paper is available at https://github.com/malllabiisc/ kg-geometry.",
"We are hoping that these resources will enable one to quickly analyze the geometry of any KG embedding, and potentially other embeddings as well.",
"Related Work In spite of the extensive and growing literature on both KG and non-KG embedding methods, very little attention has been paid towards understanding the geometry of the learned embeddings.",
"A recent work (Mimno and Thompson, 2017) is an exception to this which addresses this problem in the context of word vectors.",
"This work revealed a surprising correlation between word vector geometry and the number of negative samples used during training.",
"Instead of word vectors, in this paper we focus on understanding the geometry of KG embeddings.",
"In spite of this difference, the insights we discover in this paper generalizes some of the observations in the work of (Mimno and Thompson, 2017) .",
"Please see Section 6.2 for more details.",
"Since KGs contain only positive triples, negative sampling has been used for training KG embeddings.",
"Effect of the number of negative samples in KG embedding performance was studied by (Toutanova et al., 2015) .",
"In this paper, we study the effect of the number of negative samples on KG embedding geometry as well as performance.",
"In addition to the additive and multiplicative KG embedding methods already mentioned in Section 1, there is another set of methods where the entity and relation vectors interact via a neural network.",
"Examples of methods in this category include NTN (Socher et al., 2013) , CONV (Toutanova et al., 2015) , ConvE (Dettmers et al., 2017) , R-GCN (Schlichtkrull et al., 2017) , ER-MLP (Dong et al., 2014) and ER-MLP-2n (Ravishankar et al., 2017) .",
"Due to space limitations, in this paper we restrict our scope to the analysis of the geometry of additive and multiplicative KG embedding models only, and leave the analysis of the geometry of neural network-based methods as part of future work.",
"Overview of KG Embedding Methods For our analysis, we consider six representative KG embedding methods: TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , STransE (Nguyen et al., 2016) , DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) and ComplEx (Trouillon et al., 2016) .",
"We refer to TransE, TransR and STransE as additive methods because they learn embeddings by modeling relations as translation vectors from one entity to another, which results in vectors interacting via the addition operation during training.",
"On the other hand, we refer to Dist-Mult, HolE and ComplEx as multiplicative methods as they quantify the likelihood of a triple belonging to the KG through a multiplicative score function.",
"The score functions optimized by these methods are summarized in Table 1 .",
"Notation: Let G = (E, R, T ) be a Knowledge Graph (KG) where E is the set of entities, R is the set of relations and T ⊂ E × R × E is the set of triples stored in the graph.",
"Most of the KG embedding methods learn vectors e ∈ R de for e ∈ E, and r ∈ R dr for r ∈ R. Some methods also learn projection matrices M r ∈ R dr×de for relations.",
"The correctness of a triple is evaluated using a model specific score function σ : E × R × E → R. For learning the embeddings, a loss function L(T , T ; θ), defined over a set of positive triples T , set of (sampled) negative triples T , and the parameters θ is optimized.",
"We use small italics characters (e.g., h, r) to represent entities and relations, and correspond-Type Model Score Function σ(h, r, t) Additive TransE (Bordes et al., 2013) − h + r − t 1 TransR (Lin et al., 2015) − Mrh + r − Mrt 1 STransE (Nguyen et al., 2016) − M 1 r h + r − M 2 r t 1 Multiplicative DistMult (Yang et al., 2014) r (h t) HolE (Nickel et al., 2016) r (h t) ComplEx (Trouillon et al., 2016) Re(r (h t )) Table 1 : Summary of various Knowledge Graph (KG) embedding methods used in the paper.",
"Please see Section 3 for more details.",
"ing bold characters to represent their vector embeddings (e.g., h, r).",
"We use bold capitalization (e.g., V) to represent a set of vectors.",
"Matrices are represented by capital italics characters (e.g., M ).",
"Additive KG Embedding Methods This is the set of methods where entity and relation vectors interact via additive operations.",
"The score function for these models can be expressed as below σ(h, r, t) = − M 1 r h + r − M 2 r t 1 (1) where h, t ∈ R de and r ∈ R dr are vectors for head entity, tail entity and relation respectively.",
"M 1 r , M 2 r ∈ R dr×de are projection matrices from entity space R de to relation space R dr .",
"TransE (Bordes et al., 2013) is the simplest additive model where the entity and relation vectors lie in same d−dimensional space, i.e., d e = d r = d. The projection matrices M 1 r = M 2 r = I d are identity matrices.",
"The relation vectors are modeled as translation vectors from head entity vectors to tail entity vectors.",
"Pairwise ranking loss is then used to learn these vectors.",
"Since the model is simple, it has limited capability in capturing many-to-one, one-to-many and many-to-many relations.",
"TransR (Lin et al., 2015) is another translationbased model which uses separate spaces for entity and relation vectors allowing it to address the shortcomings of TransE.",
"Entity vectors are projected into a relation specific space using the corresponding projection matrix M 1 r = M 2 r = M r .",
"The training is similar to TransE.",
"STransE (Nguyen et al., 2016) is a generalization of TransR and uses different projection matrices for head and tail entity vectors.",
"The training is similar to TransE.",
"STransE achieves better performance than the previous methods but at the cost of more number of parameters.",
"Equation 1 is the score function used in STransE.",
"TransE and TransR are special cases of STransE with M 1 r = M 2 r = I d and M 1 r = M 2 r = M r , respectively.",
"Multiplicative KG Embedding Methods This is the set of methods where the vectors interact via multiplicative operations (usually dot product).",
"The score function for these models can be expressed as σ(h, r, t) = r f (h, t) (2) where h, t, r ∈ F d are vectors for head entity, tail entity and relation respectively.",
"f (h, t) ∈ F d measures compatibility of head and tail entities and is specific to the model.",
"F is either real space R or complex space C. Detailed descriptions of the models we consider are as follows.",
"DistMult (Yang et al., 2014 ) models entities and relations as vectors in R d .",
"It uses an entry-wise product ( ) to measure compatibility between head and tail entities, while using logistic loss for training the model.",
"σ DistM ult (h, r, t) = r (h t) (3) Since the entry-wise product in (3) is symmetric, DistMult is not suitable for asymmetric and antisymmetric relations.",
"HolE (Nickel et al., 2016) also models entities and relations as vectors in R d .",
"It uses circular correlation operator ( ) as compatibility function defined as [h t] k = d−1 i=0 h i t (k+i) mod d The score function is given as σ HolE (h, r, t) = r (h t) (4) The circular correlation operator being asymmetric, can capture asymmetric and anti-symmetric relations, but at the cost of higher time complexity The points are sampled from 3d Spherical Gaussian with mean (1,1,1) and standard deviation 0.1 (left) and 1.3 (right).",
"Please refer to Section 4 for more details.",
"(O (d log d) ).",
"For training, we use pairwise ranking loss.",
"ComplEx (Trouillon et al., 2016) represents entities and relations as vectors in C d .",
"The compatibility of entity pairs is measured using entry-wise product between head and complex conjugate of tail entity vectors.",
"σ ComplEx (h, r, t) = Re(r (h t )) (5) In contrast to (3), using complex vectors in (5) allows ComplEx to handle symmetric, asymmetric and anti-symmetric relations using the same score function.",
"Similar to DistMult, logistic loss is used for training the model.",
"Metrics For our geometrical analysis, we first define a term 'alignment to mean' (ATM) of a vector v belonging to a set of vectors V, as the cosine similarity 1 between v and the mean of all vectors in V. ATM(v, V) = cosine v, 1 |V| x∈V x We also define 'conicity' of a set V as the mean ATM of all vectors in V. By this definition, a high value of Conicity(V) would imply that the vectors in V lie in a narrow cone centered at origin.",
"In other words, the vectors in the set V are highly aligned with each other.",
"In addition to that, we define the variance of ATM across all vectors in V, as the 'vector spread'(VS) of set V, For each method, a plot averaged across entity frequency bins is shown.",
"From these plots, we conclude that entity embeddings from additive models tend to have low (positive as well as negative) ATM and thereby low Conicity and high vector spread.",
"Interestingly, this is reversed in case of multiplicative methods.",
"Please see Section 6.1 for more details.",
"Conicity(V) = 1 |V| v∈V ATM(v, V) 1 cosine(u, v) = u v u v Dataset VS(V) = 1 |V| v∈V ATM(v, V)−Conicity(V) Experimental Setup Datasets: We run our experiments on subsets of two widely used datasets, viz., Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995) , called FB15k and WN18 (Bordes et al., 2013) , respectively.",
"We detail the characteristics of these datasets in Table 2 .",
"Please note that while the results presented in Section 6 are on the FB15K dataset, we reach the same conclusions on WN18.",
"The plots for our experiments on WN18 can be found in the Supplementary Section.",
"Hyperparameters: We experiment with multiple values of hyperparameters to understand their effect on the geometry of KG embeddings.",
"Specifically, we vary the dimension of the generated vectors between {50, 100, 200} and the number of negative samples used during training between {1, 50, 100}.",
"For more details on algorithm specific hyperparameters, we refer the reader to the Supplementary Section.",
"2 2 For training, we used codes from https://github.",
"Frequency Bins: We follow (Mimno and Thompson, 2017) for entity and relation samples used in the analysis.",
"Multiple bins of entities and relations are created based on their frequencies and 100 randomly sampled vectors are taken from each bin.",
"These set of sampled vectors are then used for our analysis.",
"For more information about sampling vectors, please refer to (Mimno and Thompson, 2017) .",
"Results and Analysis In this section, we evaluate the following questions.",
"• Does model type (e.g., additive vs multiplicative) have any effect on the geometry of embeddings?",
"(Section 6.",
"For each method, a plot averaged across entity frequency bins is shown.",
"Trends in these plots are similar to those in Figure 2 .",
"Main findings from these plots are summarized in Section 6.1.",
"• Does negative sampling have any effect on the embedding geometry?",
"(Section 6.2) • Does the dimension of embedding have any effect on its geometry?",
"(Section 6.3) • How is task performance related to embedding geometry?",
"(Section 6.4) In each subsection, we summarize the main findings at the beginning, followed by evidence supporting those findings.",
"Effect of Model Type on Geometry Summary of Findings: Additive: Low conicity and high vector spread.",
"Multiplicative: High conicity and low vector spread.",
"In this section, we explore whether the type of the score function optimized during the training has any effect on the geometry of the resulting embedding.",
"For this experiment, we set the number of negative samples to 1 and the vector dimension to 100 (we got similar results for 50-dimensional vectors).",
"Figure 2 and Figure 3 show the distribution of ATMs of these sampled entity and relation vectors, respectively.",
"3 Entity Embeddings: As seen in Figure 2 , there is a stark difference between the geometries of entity vectors produced by additive and multiplicative models.",
"The ATMs of all entity vectors produced by multiplicative models are positive with very low vector spread.",
"Their high conicity suggests that they are not uniformly dispersed in the vector space, but lie in a narrow cone along the mean vector.",
"This is in contrast to the entity vectors obtained from additive models which are both positive and negative with higher vector spread.",
"From the lower values of conicity, we conclude that entity vectors from additive models are evenly dispersed in the vector space.",
"This observation is also reinforced by looking at the high vector spread of additive models in comparison to that of multiplicative models.",
"We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins.",
"However, no such pattern was observed for multiplicative models and In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.2 conicity was consistently similar across frequency bins.",
"For clarity, we have not shown different plots for individual frequency bins.",
"Relation Embeddings: As in entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3 .",
"The conicity of relation vectors generated using additive models is almost zero across frequency bands.",
"This coupled with the high vector spread observed, suggests that these vectors are scattered throughout the vector space.",
"Relation vectors from multiplicative models exhibit high conicity and low vector spread, suggesting that they lie in a narrow cone centered at origin, like their entity counterparts.",
"Effect of Number of Negative Samples on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in #NegativeSamples for both entities and relations.",
"Multiplicative: Conicity increases while average vector length decrease with increasing #NegativeSamples for entities.",
"Conicity decreases, while average vector length remains constant (except HolE) for relations.",
"For experiments in this section, we keep the vector dimension constant at 100.",
"Entity Embeddings: As seen in Figure 4 (left) , the conicity of entity vectors increases as the number of negative samples is increased for multiplicative models.",
"In contrast, conicity of the entity vectors generated by additive models is unaffected by change in number of negative samples and they continue to be dispersed throughout the vector space.",
"From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant of any changes in number of negative samples.",
"On the other hand, increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE.",
"The average entity vector length for HolE is nearly 1 for any number of negative samples, which is understandable considering it constrains the entity vectors to lie inside a unit ball (Nickel et al., 2016) .",
"This constraint is also enforced by the additive models: TransE, TransR, and STransE.",
"Relation Embeddings: Similar to entity embeddings, in case of relation vectors trained using additive models, the average length and conicity do not change while varying the number of negative samples.",
"However, the conicity of relation vectors from multiplicative models decreases with increase in negative sampling.",
"The average relation vector length is invariant for all multiplicative methods, except for HolE.",
"We see a surprisingly big jump in average relation vector length for HolE going from 1 to 50 negative samples, but it does not change after that.",
"Due to space constraints in the paper, we refer the reader to the Supplementary Section for plots discussing the effect of number of negative samples on geometry of relation vectors.",
"We note that the multiplicative score between two vectors may be increased by either increasing the alignment between the two vectors (i.e., increasing Conicity and reducing vector spread between them), or by increasing their lengths.",
"It is interesting to note that we see exactly these effects in the geometry of multiplicative methods In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.3. analyzed above.",
"Correlation with Geometry of Word Embeddings Our conclusions from the geometrical analysis of entity vectors produced by multiplicative models are similar to the results in (Mimno and Thompson, 2017) , where increase in negative sampling leads to increased conicity of word vectors trained using the skip-gram with negative sampling (SGNS) method.",
"On the other hand, additive models remain unaffected by these changes.",
"SGNS tries to maximize a score function of the form w T · c for positive word context pairs, where w is the word vector and c is the context vector (Mikolov et al., 2013) .",
"This is very similar to the score function of multiplicative models as seen in Table 1 .",
"Hence, SGNS can be considered as a multiplicative model in the word domain.",
"Hence, we argue that our result on the increase in negative samples increasing the conicity of vectors trained using a multiplicative score function can be considered as a generalization of the one in (Mimno and Thompson, 2017) .",
"Effect of Vector Dimension on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in dimension for both entities and relations.",
"Multiplicative: Conicity decreases for both entities and relations with increasing dimension.",
"Average vector length increases for both entities and relations, except for HolE entities.",
"Entity Embeddings: To study the effect of vec-tor dimension on conicity and length, we set the number of negative samples to 1, while varying the vector dimension.",
"From Figure 5 (left), we observe that the conicity for entity vectors generated by any additive model is almost invariant of increase in dimension, though STransE exhibits a slight decrease.",
"In contrast, entity vector from multiplicative models show a clear decreasing pattern with increasing dimension.",
"As seen in Figure 5 (right) , the average lengths of entity vectors from multiplicative models increase sharply with increasing vector dimension, except for HolE.",
"In case of HolE, the average vector length remains constant at one.",
"Deviation involving HolE is expected as it enforces entity vectors to fall within a unit ball.",
"Similar constraints are enforced on entity vectors for additive models as well.",
"Thus, the average entity vector lengths are not affected by increasing vector dimension for all additive models.",
"Relation Embeddings: We reach similar conclusion when analyzing against increasing dimension the change in geometry of relation vectors produced using these KG embedding methods.",
"In this setting, the average length of relation vectors learned by HolE also increases as dimension is increased.",
"This is consistent with the other methods in the multiplicative family.",
"This is because, unlike entity vectors, the lengths of relation vectors of HolE are not constrained to be less than unit length.",
"Due to lack of space, we are unable to show plots for relation vectors here, but the same can be found in the Supplementary Section.",
"Relating Geometry to Performance Summary of Findings: Additive: Neither entites nor relations exhibit correlation between geometry and performance.",
"Multiplicative: Keeping negative samples fixed, lower conicity or higher average vector length for entities leads to improved performance.",
"No relationship for relations.",
"In this section, we analyze the relationship between geometry and performance on the Link prediction task, using the same setting as in (Bordes et al., 2013) .",
"Figure 6 (left) presents the effects of conicity of entity vectors on performance, while Figure 6 (right) shows the effects of average entity vector length.",
"4 As we see from Figure 6 (left), for fixed number of negative samples, the multiplicative model with lower conicity of entity vectors achieves better performance.",
"This performance gain is larger for higher numbers of negative samples (N).",
"Additive models don't exhibit any relationship between performance and conicity, as they are all clustered around zero conicity, which is in-line with our observations in previous sections.",
"In Figure 6 (right) , for all multiplicative models except HolE, a higher average entity vector length translates to better performance, while the number of negative samples is kept fixed.",
"Additive models and HolE don't exhibit any such patterns, as they are all clustered just below unit average entity vector length.",
"The above two observations for multiplicative models make intuitive sense, as lower conicity and higher average vector length would both translate to vectors being more dispersed in the space.",
"We see another interesting observation regarding the high sensitivity of HolE to the number of negative samples used during training.",
"Using a large number of negative examples (e.g., N = 50 or 100) leads to very high conicity in case of HolE.",
"Figure 6 (right) shows that average entity vector length of HolE is always one.",
"These two observations point towards HolE's entity vectors lying in a tiny part of the space.",
"This translates to HolE performing poorer than all other models in case of high numbers of negative sampling.",
"We also did a similar study for relation vectors, but did not see any discernible patterns.",
"Conclusion In this paper, we have initiated a systematic study into the important but unexplored problem of analyzing geometry of various Knowledge Graph (KG) embedding methods.",
"To the best of our knowledge, this is the first study of its kind.",
"Through extensive experiments on multiple realworld datasets, we are able to identify several insights into the geometry of KG embeddings.",
"We have also explored the relationship between KG embedding geometry and its task performance.",
"We have shared all our source code to foster further research in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"6.2",
"6.2.1",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Overview of KG Embedding Methods",
"Additive KG Embedding Methods",
"Multiplicative KG Embedding Methods",
"Metrics",
"Experimental Setup",
"Results and Analysis",
"Effect of Number of Negative Samples on Geometry",
"Correlation with Geometry of Word Embeddings",
"Conclusion"
]
} | GEM-SciDuet-train-132#paper-1355#slide-0 | Knowledge Graphs KG | Football Team Lionel Messi | Football Team Lionel Messi | [] |
GEM-SciDuet-train-132#paper-1355#slide-1 | 1355 | Towards Understanding the Geometry of Knowledge Graph Embeddings | GEM-SciDuet-train-132#paper-1355#slide-1 | KG Embeddings | Represents entities and relations as vectors in a vector space
1. Translating Embeddings for Modeling Multi-relational Data, Bordes et al. NIPS 2013. | Represents entities and relations as vectors in a vector space
1. Translating Embeddings for Modeling Multi-relational Data, Bordes et al. NIPS 2013. | [] |
GEM-SciDuet-train-132#paper-1355#slide-2 | 1355 | Towards Understanding the Geometry of Knowledge Graph Embeddings | Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored -we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embeddings methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188
],
"paper_content_text": [
"Introduction Knowledge Graphs (KGs) are multi-relational graphs where nodes represent entities and typededges represent relationships among entities.",
"Recent research in this area has resulted in the development of several large KGs, such as NELL (Mitchell et al., 2015) , YAGO (Suchanek et al., 2007) , and Freebase (Bollacker et al., 2008) , among others.",
"These KGs contain thousands of predicates (e.g., person, city, mayorOf(person, city), etc.",
"), and millions of triples involving such predicates, e.g., (Bill de Blasio, mayorOf, New York City) .",
"The problem of learning embeddings for Knowledge Graphs has received significant attention in recent years, with several methods being proposed (Bordes et al., 2013; Lin et al., 2015; Nguyen et al., 2016; Nickel et al., 2016; Trouillon et al., 2016) .",
"These methods represent entities and relations in a KG as vectors in high dimensional space.",
"These vectors can then be used for various tasks, such as, link prediction, entity classification etc.",
"Starting with TransE (Bordes et al., 2013) , there have been many KG embedding methods such as TransH (Wang et al., 2014) , TransR (Lin et al., 2015) and STransE (Nguyen et al., 2016) which represent relations as translation vectors from head entities to tail entities.",
"These are additive models, as the vectors interact via addition and subtraction.",
"Other KG embedding models, such as, DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) , and ComplEx (Trouillon et al., 2016) are multiplicative where entityrelation-entity triple likelihood is quantified by a multiplicative score function.",
"All these methods employ a score function for distinguishing correct triples from incorrect ones.",
"In spite of the existence of many KG embedding methods, our understanding of the geometry and structure of such embeddings is very shallow.",
"A recent work (Mimno and Thompson, 2017) analyzed the geometry of word embeddings.",
"However, the problem of analyzing geometry of KG embeddings is still unexplored -we fill this important gap.",
"In this paper, we analyze the geometry of such vectors in terms of their lengths and conicity, which, as defined in Section 4, describes their positions and orientations in the vector space.",
"We later study the effects of model type and training hyperparameters on the geometry of KG embeddings and correlate geometry with performance.",
"We make the following contributions: • We initiate a study to analyze the geometry of various Knowledge Graph (KG) embeddings.",
"To the best of our knowledge, this is the first study of its kind.",
"We also formalize various metrics which can be used to study geometry of a set of vectors.",
"• Through extensive analysis, we discover several interesting insights about the geometry of KG embeddings.",
"For example, we find systematic differences between the geometries of embeddings learned by additive and multiplicative KG embedding methods.",
"• We also study the relationship between geometric attributes and predictive performance of the embeddings, resulting in several new insights.",
"For example, in case of multiplicative models, we observe that for entity vectors generated with a fixed number of negative samples, lower conicity (as defined in Section 4) or higher average vector length lead to higher performance.",
"Source code of all the analysis tools developed as part of this paper is available at https://github.com/malllabiisc/ kg-geometry.",
"We are hoping that these resources will enable one to quickly analyze the geometry of any KG embedding, and potentially other embeddings as well.",
"Related Work In spite of the extensive and growing literature on both KG and non-KG embedding methods, very little attention has been paid towards understanding the geometry of the learned embeddings.",
"A recent work (Mimno and Thompson, 2017) is an exception to this which addresses this problem in the context of word vectors.",
"This work revealed a surprising correlation between word vector geometry and the number of negative samples used during training.",
"Instead of word vectors, in this paper we focus on understanding the geometry of KG embeddings.",
"In spite of this difference, the insights we discover in this paper generalizes some of the observations in the work of (Mimno and Thompson, 2017) .",
"Please see Section 6.2 for more details.",
"Since KGs contain only positive triples, negative sampling has been used for training KG embeddings.",
"Effect of the number of negative samples in KG embedding performance was studied by (Toutanova et al., 2015) .",
"In this paper, we study the effect of the number of negative samples on KG embedding geometry as well as performance.",
"In addition to the additive and multiplicative KG embedding methods already mentioned in Section 1, there is another set of methods where the entity and relation vectors interact via a neural network.",
"Examples of methods in this category include NTN (Socher et al., 2013) , CONV (Toutanova et al., 2015) , ConvE (Dettmers et al., 2017) , R-GCN (Schlichtkrull et al., 2017) , ER-MLP (Dong et al., 2014) and ER-MLP-2n (Ravishankar et al., 2017) .",
"Due to space limitations, in this paper we restrict our scope to the analysis of the geometry of additive and multiplicative KG embedding models only, and leave the analysis of the geometry of neural network-based methods as part of future work.",
"Overview of KG Embedding Methods For our analysis, we consider six representative KG embedding methods: TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , STransE (Nguyen et al., 2016) , DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) and ComplEx (Trouillon et al., 2016) .",
"We refer to TransE, TransR and STransE as additive methods because they learn embeddings by modeling relations as translation vectors from one entity to another, which results in vectors interacting via the addition operation during training.",
"On the other hand, we refer to Dist-Mult, HolE and ComplEx as multiplicative methods as they quantify the likelihood of a triple belonging to the KG through a multiplicative score function.",
"The score functions optimized by these methods are summarized in Table 1 .",
"Notation: Let G = (E, R, T ) be a Knowledge Graph (KG) where E is the set of entities, R is the set of relations and T ⊂ E × R × E is the set of triples stored in the graph.",
"Most of the KG embedding methods learn vectors e ∈ R de for e ∈ E, and r ∈ R dr for r ∈ R. Some methods also learn projection matrices M r ∈ R dr×de for relations.",
"The correctness of a triple is evaluated using a model specific score function σ : E × R × E → R. For learning the embeddings, a loss function L(T , T ; θ), defined over a set of positive triples T , set of (sampled) negative triples T , and the parameters θ is optimized.",
"We use small italics characters (e.g., h, r) to represent entities and relations, and correspond-Type Model Score Function σ(h, r, t) Additive TransE (Bordes et al., 2013) − h + r − t 1 TransR (Lin et al., 2015) − Mrh + r − Mrt 1 STransE (Nguyen et al., 2016) − M 1 r h + r − M 2 r t 1 Multiplicative DistMult (Yang et al., 2014) r (h t) HolE (Nickel et al., 2016) r (h t) ComplEx (Trouillon et al., 2016) Re(r (h t )) Table 1 : Summary of various Knowledge Graph (KG) embedding methods used in the paper.",
"Please see Section 3 for more details.",
"ing bold characters to represent their vector embeddings (e.g., h, r).",
"We use bold capitalization (e.g., V) to represent a set of vectors.",
"Matrices are represented by capital italics characters (e.g., M ).",
"Additive KG Embedding Methods This is the set of methods where entity and relation vectors interact via additive operations.",
"The score function for these models can be expressed as below σ(h, r, t) = − M 1 r h + r − M 2 r t 1 (1) where h, t ∈ R de and r ∈ R dr are vectors for head entity, tail entity and relation respectively.",
"M 1 r , M 2 r ∈ R dr×de are projection matrices from entity space R de to relation space R dr .",
"TransE (Bordes et al., 2013) is the simplest additive model where the entity and relation vectors lie in same d−dimensional space, i.e., d e = d r = d. The projection matrices M 1 r = M 2 r = I d are identity matrices.",
"The relation vectors are modeled as translation vectors from head entity vectors to tail entity vectors.",
"Pairwise ranking loss is then used to learn these vectors.",
"Since the model is simple, it has limited capability in capturing many-to-one, one-to-many and many-to-many relations.",
"TransR (Lin et al., 2015) is another translationbased model which uses separate spaces for entity and relation vectors allowing it to address the shortcomings of TransE.",
"Entity vectors are projected into a relation specific space using the corresponding projection matrix M 1 r = M 2 r = M r .",
"The training is similar to TransE.",
"STransE (Nguyen et al., 2016) is a generalization of TransR and uses different projection matrices for head and tail entity vectors.",
"The training is similar to TransE.",
"STransE achieves better performance than the previous methods but at the cost of more number of parameters.",
"Equation 1 is the score function used in STransE.",
"TransE and TransR are special cases of STransE with M 1 r = M 2 r = I d and M 1 r = M 2 r = M r , respectively.",
"Multiplicative KG Embedding Methods This is the set of methods where the vectors interact via multiplicative operations (usually dot product).",
"The score function for these models can be expressed as σ(h, r, t) = r f (h, t) (2) where h, t, r ∈ F d are vectors for head entity, tail entity and relation respectively.",
"f (h, t) ∈ F d measures compatibility of head and tail entities and is specific to the model.",
"F is either real space R or complex space C. Detailed descriptions of the models we consider are as follows.",
"DistMult (Yang et al., 2014 ) models entities and relations as vectors in R d .",
"It uses an entry-wise product ( ) to measure compatibility between head and tail entities, while using logistic loss for training the model.",
"σ DistM ult (h, r, t) = r (h t) (3) Since the entry-wise product in (3) is symmetric, DistMult is not suitable for asymmetric and antisymmetric relations.",
"HolE (Nickel et al., 2016) also models entities and relations as vectors in R d .",
"It uses circular correlation operator ( ) as compatibility function defined as [h t] k = d−1 i=0 h i t (k+i) mod d The score function is given as σ HolE (h, r, t) = r (h t) (4) The circular correlation operator being asymmetric, can capture asymmetric and anti-symmetric relations, but at the cost of higher time complexity The points are sampled from 3d Spherical Gaussian with mean (1,1,1) and standard deviation 0.1 (left) and 1.3 (right).",
"Please refer to Section 4 for more details.",
"(O (d log d) ).",
"For training, we use pairwise ranking loss.",
"ComplEx (Trouillon et al., 2016) represents entities and relations as vectors in C d .",
"The compatibility of entity pairs is measured using entry-wise product between head and complex conjugate of tail entity vectors.",
"σ ComplEx (h, r, t) = Re(r (h t )) (5) In contrast to (3), using complex vectors in (5) allows ComplEx to handle symmetric, asymmetric and anti-symmetric relations using the same score function.",
"Similar to DistMult, logistic loss is used for training the model.",
"Metrics For our geometrical analysis, we first define a term 'alignment to mean' (ATM) of a vector v belonging to a set of vectors V, as the cosine similarity 1 between v and the mean of all vectors in V. ATM(v, V) = cosine v, 1 |V| x∈V x We also define 'conicity' of a set V as the mean ATM of all vectors in V. By this definition, a high value of Conicity(V) would imply that the vectors in V lie in a narrow cone centered at origin.",
"In other words, the vectors in the set V are highly aligned with each other.",
"In addition to that, we define the variance of ATM across all vectors in V, as the 'vector spread'(VS) of set V, For each method, a plot averaged across entity frequency bins is shown.",
"From these plots, we conclude that entity embeddings from additive models tend to have low (positive as well as negative) ATM and thereby low Conicity and high vector spread.",
"Interestingly, this is reversed in case of multiplicative methods.",
"Please see Section 6.1 for more details.",
"Conicity(V) = 1 |V| v∈V ATM(v, V) 1 cosine(u, v) = u v u v Dataset VS(V) = 1 |V| v∈V ATM(v, V)−Conicity(V) Experimental Setup Datasets: We run our experiments on subsets of two widely used datasets, viz., Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995) , called FB15k and WN18 (Bordes et al., 2013) , respectively.",
"We detail the characteristics of these datasets in Table 2 .",
"Please note that while the results presented in Section 6 are on the FB15K dataset, we reach the same conclusions on WN18.",
"The plots for our experiments on WN18 can be found in the Supplementary Section.",
"Hyperparameters: We experiment with multiple values of hyperparameters to understand their effect on the geometry of KG embeddings.",
"Specifically, we vary the dimension of the generated vectors between {50, 100, 200} and the number of negative samples used during training between {1, 50, 100}.",
"For more details on algorithm specific hyperparameters, we refer the reader to the Supplementary Section.",
"2 2 For training, we used codes from https://github.",
"Frequency Bins: We follow (Mimno and Thompson, 2017) for entity and relation samples used in the analysis.",
"Multiple bins of entities and relations are created based on their frequencies and 100 randomly sampled vectors are taken from each bin.",
"These set of sampled vectors are then used for our analysis.",
"For more information about sampling vectors, please refer to (Mimno and Thompson, 2017) .",
"Results and Analysis In this section, we evaluate the following questions.",
"• Does model type (e.g., additive vs multiplicative) have any effect on the geometry of embeddings?",
"(Section 6.",
"For each method, a plot averaged across entity frequency bins is shown.",
"Trends in these plots are similar to those in Figure 2 .",
"Main findings from these plots are summarized in Section 6.1.",
"• Does negative sampling have any effect on the embedding geometry?",
"(Section 6.2) • Does the dimension of embedding have any effect on its geometry?",
"(Section 6.3) • How is task performance related to embedding geometry?",
"(Section 6.4) In each subsection, we summarize the main findings at the beginning, followed by evidence supporting those findings.",
"Effect of Model Type on Geometry Summary of Findings: Additive: Low conicity and high vector spread.",
"Multiplicative: High conicity and low vector spread.",
"In this section, we explore whether the type of the score function optimized during the training has any effect on the geometry of the resulting embedding.",
"For this experiment, we set the number of negative samples to 1 and the vector dimension to 100 (we got similar results for 50-dimensional vectors).",
"Figure 2 and Figure 3 show the distribution of ATMs of these sampled entity and relation vectors, respectively.",
"3 Entity Embeddings: As seen in Figure 2 , there is a stark difference between the geometries of entity vectors produced by additive and multiplicative models.",
"The ATMs of all entity vectors produced by multiplicative models are positive with very low vector spread.",
"Their high conicity suggests that they are not uniformly dispersed in the vector space, but lie in a narrow cone along the mean vector.",
"This is in contrast to the entity vectors obtained from additive models which are both positive and negative with higher vector spread.",
"From the lower values of conicity, we conclude that entity vectors from additive models are evenly dispersed in the vector space.",
"This observation is also reinforced by looking at the high vector spread of additive models in comparison to that of multiplicative models.",
"We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins.",
"However, no such pattern was observed for multiplicative models and In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.2 conicity was consistently similar across frequency bins.",
"For clarity, we have not shown different plots for individual frequency bins.",
"Relation Embeddings: As in entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3 .",
"The conicity of relation vectors generated using additive models is almost zero across frequency bands.",
"This coupled with the high vector spread observed, suggests that these vectors are scattered throughout the vector space.",
"Relation vectors from multiplicative models exhibit high conicity and low vector spread, suggesting that they lie in a narrow cone centered at origin, like their entity counterparts.",
"Effect of Number of Negative Samples on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in #NegativeSamples for both entities and relations.",
"Multiplicative: Conicity increases while average vector length decrease with increasing #NegativeSamples for entities.",
"Conicity decreases, while average vector length remains constant (except HolE) for relations.",
"For experiments in this section, we keep the vector dimension constant at 100.",
"Entity Embeddings: As seen in Figure 4 (left) , the conicity of entity vectors increases as the number of negative samples is increased for multiplicative models.",
"In contrast, conicity of the entity vectors generated by additive models is unaffected by change in number of negative samples and they continue to be dispersed throughout the vector space.",
"From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant of any changes in number of negative samples.",
"On the other hand, increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE.",
"The average entity vector length for HolE is nearly 1 for any number of negative samples, which is understandable considering it constrains the entity vectors to lie inside a unit ball (Nickel et al., 2016) .",
"This constraint is also enforced by the additive models: TransE, TransR, and STransE.",
"Relation Embeddings: Similar to entity embeddings, in case of relation vectors trained using additive models, the average length and conicity do not change while varying the number of negative samples.",
"However, the conicity of relation vectors from multiplicative models decreases with increase in negative sampling.",
"The average relation vector length is invariant for all multiplicative methods, except for HolE.",
"We see a surprisingly big jump in average relation vector length for HolE going from 1 to 50 negative samples, but it does not change after that.",
"Due to space constraints in the paper, we refer the reader to the Supplementary Section for plots discussing the effect of number of negative samples on geometry of relation vectors.",
"We note that the multiplicative score between two vectors may be increased by either increasing the alignment between the two vectors (i.e., increasing Conicity and reducing vector spread between them), or by increasing their lengths.",
"It is interesting to note that we see exactly these effects in the geometry of multiplicative methods In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.3. analyzed above.",
"Correlation with Geometry of Word Embeddings Our conclusions from the geometrical analysis of entity vectors produced by multiplicative models are similar to the results in (Mimno and Thompson, 2017) , where increase in negative sampling leads to increased conicity of word vectors trained using the skip-gram with negative sampling (SGNS) method.",
"On the other hand, additive models remain unaffected by these changes.",
"SGNS tries to maximize a score function of the form w T · c for positive word context pairs, where w is the word vector and c is the context vector (Mikolov et al., 2013) .",
"This is very similar to the score function of multiplicative models as seen in Table 1 .",
"Hence, SGNS can be considered as a multiplicative model in the word domain.",
"Hence, we argue that our result on the increase in negative samples increasing the conicity of vectors trained using a multiplicative score function can be considered as a generalization of the one in (Mimno and Thompson, 2017) .",
"Effect of Vector Dimension on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in dimension for both entities and relations.",
"Multiplicative: Conicity decreases for both entities and relations with increasing dimension.",
"Average vector length increases for both entities and relations, except for HolE entities.",
"Entity Embeddings: To study the effect of vec-tor dimension on conicity and length, we set the number of negative samples to 1, while varying the vector dimension.",
"From Figure 5 (left), we observe that the conicity for entity vectors generated by any additive model is almost invariant of increase in dimension, though STransE exhibits a slight decrease.",
"In contrast, entity vector from multiplicative models show a clear decreasing pattern with increasing dimension.",
"As seen in Figure 5 (right) , the average lengths of entity vectors from multiplicative models increase sharply with increasing vector dimension, except for HolE.",
"In case of HolE, the average vector length remains constant at one.",
"Deviation involving HolE is expected as it enforces entity vectors to fall within a unit ball.",
"Similar constraints are enforced on entity vectors for additive models as well.",
"Thus, the average entity vector lengths are not affected by increasing vector dimension for all additive models.",
"Relation Embeddings: We reach similar conclusion when analyzing against increasing dimension the change in geometry of relation vectors produced using these KG embedding methods.",
"In this setting, the average length of relation vectors learned by HolE also increases as dimension is increased.",
"This is consistent with the other methods in the multiplicative family.",
"This is because, unlike entity vectors, the lengths of relation vectors of HolE are not constrained to be less than unit length.",
"Due to lack of space, we are unable to show plots for relation vectors here, but the same can be found in the Supplementary Section.",
"Relating Geometry to Performance Summary of Findings: Additive: Neither entites nor relations exhibit correlation between geometry and performance.",
"Multiplicative: Keeping negative samples fixed, lower conicity or higher average vector length for entities leads to improved performance.",
"No relationship for relations.",
"In this section, we analyze the relationship between geometry and performance on the Link prediction task, using the same setting as in (Bordes et al., 2013) .",
"Figure 6 (left) presents the effects of conicity of entity vectors on performance, while Figure 6 (right) shows the effects of average entity vector length.",
"4 As we see from Figure 6 (left), for fixed number of negative samples, the multiplicative model with lower conicity of entity vectors achieves better performance.",
"This performance gain is larger for higher numbers of negative samples (N).",
"Additive models don't exhibit any relationship between performance and conicity, as they are all clustered around zero conicity, which is in-line with our observations in previous sections.",
"In Figure 6 (right) , for all multiplicative models except HolE, a higher average entity vector length translates to better performance, while the number of negative samples is kept fixed.",
"Additive models and HolE don't exhibit any such patterns, as they are all clustered just below unit average entity vector length.",
"The above two observations for multiplicative models make intuitive sense, as lower conicity and higher average vector length would both translate to vectors being more dispersed in the space.",
"We see another interesting observation regarding the high sensitivity of HolE to the number of negative samples used during training.",
"Using a large number of negative examples (e.g., N = 50 or 100) leads to very high conicity in case of HolE.",
"Figure 6 (right) shows that average entity vector length of HolE is always one.",
"These two observations point towards HolE's entity vectors lying in a tiny part of the space.",
"This translates to HolE performing poorer than all other models in case of high numbers of negative sampling.",
"We also did a similar study for relation vectors, but did not see any discernible patterns.",
"Conclusion In this paper, we have initiated a systematic study into the important but unexplored problem of analyzing geometry of various Knowledge Graph (KG) embedding methods.",
"To the best of our knowledge, this is the first study of its kind.",
"Through extensive experiments on multiple realworld datasets, we are able to identify several insights into the geometry of KG embeddings.",
"We have also explored the relationship between KG embedding geometry and its task performance.",
"We have shared all our source code to foster further research in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"6.2",
"6.2.1",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Overview of KG Embedding Methods",
"Additive KG Embedding Methods",
"Multiplicative KG Embedding Methods",
"Metrics",
"Experimental Setup",
"Results and Analysis",
"Effect of Number of Negative Samples on Geometry",
"Correlation with Geometry of Word Embeddings",
"Conclusion"
]
} | GEM-SciDuet-train-132#paper-1355#slide-2 | Geometry of Embeddings | Arrangement of vectors in the vector space.
A recent work by (Mimno and Thompson, 2017) [1] presented an analysis of the geometry of word embeddings and revealed interesting results.
However, geometrical understanding of KG embeddings is very limited, despite their popularity.
1. The strange geometry of skip-gram with negative sampling, Mimno and Thompson, EMNLP 2017 | Arrangement of vectors in the vector space.
A recent work by (Mimno and Thompson, 2017) [1] presented an analysis of the geometry of word embeddings and revealed interesting results.
However, geometrical understanding of KG embeddings is very limited, despite their popularity.
1. The strange geometry of skip-gram with negative sampling, Mimno and Thompson, EMNLP 2017 | [] |
GEM-SciDuet-train-132#paper-1355#slide-3 | 1355 | Towards Understanding the Geometry of Knowledge Graph Embeddings | Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored -we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embeddings methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188
],
"paper_content_text": [
"Introduction Knowledge Graphs (KGs) are multi-relational graphs where nodes represent entities and typededges represent relationships among entities.",
"Recent research in this area has resulted in the development of several large KGs, such as NELL (Mitchell et al., 2015) , YAGO (Suchanek et al., 2007) , and Freebase (Bollacker et al., 2008) , among others.",
"These KGs contain thousands of predicates (e.g., person, city, mayorOf(person, city), etc.",
"), and millions of triples involving such predicates, e.g., (Bill de Blasio, mayorOf, New York City) .",
"The problem of learning embeddings for Knowledge Graphs has received significant attention in recent years, with several methods being proposed (Bordes et al., 2013; Lin et al., 2015; Nguyen et al., 2016; Nickel et al., 2016; Trouillon et al., 2016) .",
"These methods represent entities and relations in a KG as vectors in high dimensional space.",
"These vectors can then be used for various tasks, such as, link prediction, entity classification etc.",
"Starting with TransE (Bordes et al., 2013) , there have been many KG embedding methods such as TransH (Wang et al., 2014) , TransR (Lin et al., 2015) and STransE (Nguyen et al., 2016) which represent relations as translation vectors from head entities to tail entities.",
"These are additive models, as the vectors interact via addition and subtraction.",
"Other KG embedding models, such as, DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) , and ComplEx (Trouillon et al., 2016) are multiplicative where entityrelation-entity triple likelihood is quantified by a multiplicative score function.",
"All these methods employ a score function for distinguishing correct triples from incorrect ones.",
"In spite of the existence of many KG embedding methods, our understanding of the geometry and structure of such embeddings is very shallow.",
"A recent work (Mimno and Thompson, 2017) analyzed the geometry of word embeddings.",
"However, the problem of analyzing geometry of KG embeddings is still unexplored -we fill this important gap.",
"In this paper, we analyze the geometry of such vectors in terms of their lengths and conicity, which, as defined in Section 4, describes their positions and orientations in the vector space.",
"We later study the effects of model type and training hyperparameters on the geometry of KG embeddings and correlate geometry with performance.",
"We make the following contributions: • We initiate a study to analyze the geometry of various Knowledge Graph (KG) embeddings.",
"To the best of our knowledge, this is the first study of its kind.",
"We also formalize various metrics which can be used to study geometry of a set of vectors.",
"• Through extensive analysis, we discover several interesting insights about the geometry of KG embeddings.",
"For example, we find systematic differences between the geometries of embeddings learned by additive and multiplicative KG embedding methods.",
"• We also study the relationship between geometric attributes and predictive performance of the embeddings, resulting in several new insights.",
"For example, in case of multiplicative models, we observe that for entity vectors generated with a fixed number of negative samples, lower conicity (as defined in Section 4) or higher average vector length lead to higher performance.",
"Source code of all the analysis tools developed as part of this paper is available at https://github.com/malllabiisc/ kg-geometry.",
"We are hoping that these resources will enable one to quickly analyze the geometry of any KG embedding, and potentially other embeddings as well.",
"Related Work In spite of the extensive and growing literature on both KG and non-KG embedding methods, very little attention has been paid towards understanding the geometry of the learned embeddings.",
"A recent work (Mimno and Thompson, 2017) is an exception to this which addresses this problem in the context of word vectors.",
"This work revealed a surprising correlation between word vector geometry and the number of negative samples used during training.",
"Instead of word vectors, in this paper we focus on understanding the geometry of KG embeddings.",
"In spite of this difference, the insights we discover in this paper generalizes some of the observations in the work of (Mimno and Thompson, 2017) .",
"Please see Section 6.2 for more details.",
"Since KGs contain only positive triples, negative sampling has been used for training KG embeddings.",
"Effect of the number of negative samples in KG embedding performance was studied by (Toutanova et al., 2015) .",
"In this paper, we study the effect of the number of negative samples on KG embedding geometry as well as performance.",
"In addition to the additive and multiplicative KG embedding methods already mentioned in Section 1, there is another set of methods where the entity and relation vectors interact via a neural network.",
"Examples of methods in this category include NTN (Socher et al., 2013) , CONV (Toutanova et al., 2015) , ConvE (Dettmers et al., 2017) , R-GCN (Schlichtkrull et al., 2017) , ER-MLP (Dong et al., 2014) and ER-MLP-2n (Ravishankar et al., 2017) .",
"Due to space limitations, in this paper we restrict our scope to the analysis of the geometry of additive and multiplicative KG embedding models only, and leave the analysis of the geometry of neural network-based methods as part of future work.",
"Overview of KG Embedding Methods For our analysis, we consider six representative KG embedding methods: TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , STransE (Nguyen et al., 2016) , DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) and ComplEx (Trouillon et al., 2016) .",
"We refer to TransE, TransR and STransE as additive methods because they learn embeddings by modeling relations as translation vectors from one entity to another, which results in vectors interacting via the addition operation during training.",
"On the other hand, we refer to Dist-Mult, HolE and ComplEx as multiplicative methods as they quantify the likelihood of a triple belonging to the KG through a multiplicative score function.",
"The score functions optimized by these methods are summarized in Table 1 .",
"Notation: Let G = (E, R, T ) be a Knowledge Graph (KG) where E is the set of entities, R is the set of relations and T ⊂ E × R × E is the set of triples stored in the graph.",
"Most of the KG embedding methods learn vectors e ∈ R de for e ∈ E, and r ∈ R dr for r ∈ R. Some methods also learn projection matrices M r ∈ R dr×de for relations.",
"The correctness of a triple is evaluated using a model specific score function σ : E × R × E → R. For learning the embeddings, a loss function L(T , T ; θ), defined over a set of positive triples T , set of (sampled) negative triples T , and the parameters θ is optimized.",
"We use small italics characters (e.g., h, r) to represent entities and relations, and correspond-Type Model Score Function σ(h, r, t) Additive TransE (Bordes et al., 2013) − h + r − t 1 TransR (Lin et al., 2015) − Mrh + r − Mrt 1 STransE (Nguyen et al., 2016) − M 1 r h + r − M 2 r t 1 Multiplicative DistMult (Yang et al., 2014) r (h t) HolE (Nickel et al., 2016) r (h t) ComplEx (Trouillon et al., 2016) Re(r (h t )) Table 1 : Summary of various Knowledge Graph (KG) embedding methods used in the paper.",
"Please see Section 3 for more details.",
"ing bold characters to represent their vector embeddings (e.g., h, r).",
"We use bold capitalization (e.g., V) to represent a set of vectors.",
"Matrices are represented by capital italics characters (e.g., M ).",
"Additive KG Embedding Methods This is the set of methods where entity and relation vectors interact via additive operations.",
"The score function for these models can be expressed as below σ(h, r, t) = − M 1 r h + r − M 2 r t 1 (1) where h, t ∈ R de and r ∈ R dr are vectors for head entity, tail entity and relation respectively.",
"M 1 r , M 2 r ∈ R dr×de are projection matrices from entity space R de to relation space R dr .",
"TransE (Bordes et al., 2013) is the simplest additive model where the entity and relation vectors lie in same d−dimensional space, i.e., d e = d r = d. The projection matrices M 1 r = M 2 r = I d are identity matrices.",
"The relation vectors are modeled as translation vectors from head entity vectors to tail entity vectors.",
"Pairwise ranking loss is then used to learn these vectors.",
"Since the model is simple, it has limited capability in capturing many-to-one, one-to-many and many-to-many relations.",
"TransR (Lin et al., 2015) is another translationbased model which uses separate spaces for entity and relation vectors allowing it to address the shortcomings of TransE.",
"Entity vectors are projected into a relation specific space using the corresponding projection matrix M 1 r = M 2 r = M r .",
"The training is similar to TransE.",
"STransE (Nguyen et al., 2016) is a generalization of TransR and uses different projection matrices for head and tail entity vectors.",
"The training is similar to TransE.",
"STransE achieves better performance than the previous methods but at the cost of more number of parameters.",
"Equation 1 is the score function used in STransE.",
"TransE and TransR are special cases of STransE with M 1 r = M 2 r = I d and M 1 r = M 2 r = M r , respectively.",
"Multiplicative KG Embedding Methods This is the set of methods where the vectors interact via multiplicative operations (usually dot product).",
"The score function for these models can be expressed as σ(h, r, t) = r f (h, t) (2) where h, t, r ∈ F d are vectors for head entity, tail entity and relation respectively.",
"f (h, t) ∈ F d measures compatibility of head and tail entities and is specific to the model.",
"F is either real space R or complex space C. Detailed descriptions of the models we consider are as follows.",
"DistMult (Yang et al., 2014 ) models entities and relations as vectors in R d .",
"It uses an entry-wise product ( ) to measure compatibility between head and tail entities, while using logistic loss for training the model.",
"σ DistM ult (h, r, t) = r (h t) (3) Since the entry-wise product in (3) is symmetric, DistMult is not suitable for asymmetric and antisymmetric relations.",
"HolE (Nickel et al., 2016) also models entities and relations as vectors in R d .",
"It uses circular correlation operator ( ) as compatibility function defined as [h t] k = d−1 i=0 h i t (k+i) mod d The score function is given as σ HolE (h, r, t) = r (h t) (4) The circular correlation operator being asymmetric, can capture asymmetric and anti-symmetric relations, but at the cost of higher time complexity The points are sampled from 3d Spherical Gaussian with mean (1,1,1) and standard deviation 0.1 (left) and 1.3 (right).",
"Please refer to Section 4 for more details.",
"(O (d log d) ).",
"For training, we use pairwise ranking loss.",
"ComplEx (Trouillon et al., 2016) represents entities and relations as vectors in C d .",
"The compatibility of entity pairs is measured using entry-wise product between head and complex conjugate of tail entity vectors.",
"σ ComplEx (h, r, t) = Re(r (h t )) (5) In contrast to (3), using complex vectors in (5) allows ComplEx to handle symmetric, asymmetric and anti-symmetric relations using the same score function.",
"Similar to DistMult, logistic loss is used for training the model.",
"Metrics For our geometrical analysis, we first define a term 'alignment to mean' (ATM) of a vector v belonging to a set of vectors V, as the cosine similarity 1 between v and the mean of all vectors in V. ATM(v, V) = cosine v, 1 |V| x∈V x We also define 'conicity' of a set V as the mean ATM of all vectors in V. By this definition, a high value of Conicity(V) would imply that the vectors in V lie in a narrow cone centered at origin.",
"In other words, the vectors in the set V are highly aligned with each other.",
"In addition to that, we define the variance of ATM across all vectors in V, as the 'vector spread'(VS) of set V, For each method, a plot averaged across entity frequency bins is shown.",
"From these plots, we conclude that entity embeddings from additive models tend to have low (positive as well as negative) ATM and thereby low Conicity and high vector spread.",
"Interestingly, this is reversed in case of multiplicative methods.",
"Please see Section 6.1 for more details.",
"Conicity(V) = 1 |V| v∈V ATM(v, V) 1 cosine(u, v) = u v u v Dataset VS(V) = 1 |V| v∈V ATM(v, V)−Conicity(V) Experimental Setup Datasets: We run our experiments on subsets of two widely used datasets, viz., Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995) , called FB15k and WN18 (Bordes et al., 2013) , respectively.",
"We detail the characteristics of these datasets in Table 2 .",
"Please note that while the results presented in Section 6 are on the FB15K dataset, we reach the same conclusions on WN18.",
"The plots for our experiments on WN18 can be found in the Supplementary Section.",
"Hyperparameters: We experiment with multiple values of hyperparameters to understand their effect on the geometry of KG embeddings.",
"Specifically, we vary the dimension of the generated vectors between {50, 100, 200} and the number of negative samples used during training between {1, 50, 100}.",
"For more details on algorithm specific hyperparameters, we refer the reader to the Supplementary Section.",
"2 2 For training, we used codes from https://github.",
"Frequency Bins: We follow (Mimno and Thompson, 2017) for entity and relation samples used in the analysis.",
"Multiple bins of entities and relations are created based on their frequencies and 100 randomly sampled vectors are taken from each bin.",
"These set of sampled vectors are then used for our analysis.",
"For more information about sampling vectors, please refer to (Mimno and Thompson, 2017) .",
"Results and Analysis In this section, we evaluate the following questions.",
"• Does model type (e.g., additive vs multiplicative) have any effect on the geometry of embeddings?",
"(Section 6.",
"For each method, a plot averaged across entity frequency bins is shown.",
"Trends in these plots are similar to those in Figure 2 .",
"Main findings from these plots are summarized in Section 6.1.",
"• Does negative sampling have any effect on the embedding geometry?",
"(Section 6.2) • Does the dimension of embedding have any effect on its geometry?",
"(Section 6.3) • How is task performance related to embedding geometry?",
"(Section 6.4) In each subsection, we summarize the main findings at the beginning, followed by evidence supporting those findings.",
"Effect of Model Type on Geometry Summary of Findings: Additive: Low conicity and high vector spread.",
"Multiplicative: High conicity and low vector spread.",
"In this section, we explore whether the type of the score function optimized during the training has any effect on the geometry of the resulting embedding.",
"For this experiment, we set the number of negative samples to 1 and the vector dimension to 100 (we got similar results for 50-dimensional vectors).",
"Figure 2 and Figure 3 show the distribution of ATMs of these sampled entity and relation vectors, respectively.",
"3 Entity Embeddings: As seen in Figure 2 , there is a stark difference between the geometries of entity vectors produced by additive and multiplicative models.",
"The ATMs of all entity vectors produced by multiplicative models are positive with very low vector spread.",
"Their high conicity suggests that they are not uniformly dispersed in the vector space, but lie in a narrow cone along the mean vector.",
"This is in contrast to the entity vectors obtained from additive models which are both positive and negative with higher vector spread.",
"From the lower values of conicity, we conclude that entity vectors from additive models are evenly dispersed in the vector space.",
"This observation is also reinforced by looking at the high vector spread of additive models in comparison to that of multiplicative models.",
"We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins.",
"However, no such pattern was observed for multiplicative models and In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.2 conicity was consistently similar across frequency bins.",
"For clarity, we have not shown different plots for individual frequency bins.",
"Relation Embeddings: As in entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3 .",
"The conicity of relation vectors generated using additive models is almost zero across frequency bands.",
"This coupled with the high vector spread observed, suggests that these vectors are scattered throughout the vector space.",
"Relation vectors from multiplicative models exhibit high conicity and low vector spread, suggesting that they lie in a narrow cone centered at origin, like their entity counterparts.",
"Effect of Number of Negative Samples on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in #NegativeSamples for both entities and relations.",
"Multiplicative: Conicity increases while average vector length decrease with increasing #NegativeSamples for entities.",
"Conicity decreases, while average vector length remains constant (except HolE) for relations.",
"For experiments in this section, we keep the vector dimension constant at 100.",
"Entity Embeddings: As seen in Figure 4 (left) , the conicity of entity vectors increases as the number of negative samples is increased for multiplicative models.",
"In contrast, conicity of the entity vectors generated by additive models is unaffected by change in number of negative samples and they continue to be dispersed throughout the vector space.",
"From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant of any changes in number of negative samples.",
"On the other hand, increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE.",
"The average entity vector length for HolE is nearly 1 for any number of negative samples, which is understandable considering it constrains the entity vectors to lie inside a unit ball (Nickel et al., 2016) .",
"This constraint is also enforced by the additive models: TransE, TransR, and STransE.",
"Relation Embeddings: Similar to entity embeddings, in case of relation vectors trained using additive models, the average length and conicity do not change while varying the number of negative samples.",
"However, the conicity of relation vectors from multiplicative models decreases with increase in negative sampling.",
"The average relation vector length is invariant for all multiplicative methods, except for HolE.",
"We see a surprisingly big jump in average relation vector length for HolE going from 1 to 50 negative samples, but it does not change after that.",
"Due to space constraints in the paper, we refer the reader to the Supplementary Section for plots discussing the effect of number of negative samples on geometry of relation vectors.",
"We note that the multiplicative score between two vectors may be increased by either increasing the alignment between the two vectors (i.e., increasing Conicity and reducing vector spread between them), or by increasing their lengths.",
"It is interesting to note that we see exactly these effects in the geometry of multiplicative methods In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.3. analyzed above.",
"Correlation with Geometry of Word Embeddings Our conclusions from the geometrical analysis of entity vectors produced by multiplicative models are similar to the results in (Mimno and Thompson, 2017) , where increase in negative sampling leads to increased conicity of word vectors trained using the skip-gram with negative sampling (SGNS) method.",
"On the other hand, additive models remain unaffected by these changes.",
"SGNS tries to maximize a score function of the form w T · c for positive word context pairs, where w is the word vector and c is the context vector (Mikolov et al., 2013) .",
"This is very similar to the score function of multiplicative models as seen in Table 1 .",
"Hence, SGNS can be considered as a multiplicative model in the word domain.",
"Hence, we argue that our result on the increase in negative samples increasing the conicity of vectors trained using a multiplicative score function can be considered as a generalization of the one in (Mimno and Thompson, 2017) .",
"Effect of Vector Dimension on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in dimension for both entities and relations.",
"Multiplicative: Conicity decreases for both entities and relations with increasing dimension.",
"Average vector length increases for both entities and relations, except for HolE entities.",
"Entity Embeddings: To study the effect of vec-tor dimension on conicity and length, we set the number of negative samples to 1, while varying the vector dimension.",
"From Figure 5 (left), we observe that the conicity for entity vectors generated by any additive model is almost invariant of increase in dimension, though STransE exhibits a slight decrease.",
"In contrast, entity vector from multiplicative models show a clear decreasing pattern with increasing dimension.",
"As seen in Figure 5 (right) , the average lengths of entity vectors from multiplicative models increase sharply with increasing vector dimension, except for HolE.",
"In case of HolE, the average vector length remains constant at one.",
"Deviation involving HolE is expected as it enforces entity vectors to fall within a unit ball.",
"Similar constraints are enforced on entity vectors for additive models as well.",
"Thus, the average entity vector lengths are not affected by increasing vector dimension for all additive models.",
"Relation Embeddings: We reach similar conclusion when analyzing against increasing dimension the change in geometry of relation vectors produced using these KG embedding methods.",
"In this setting, the average length of relation vectors learned by HolE also increases as dimension is increased.",
"This is consistent with the other methods in the multiplicative family.",
"This is because, unlike entity vectors, the lengths of relation vectors of HolE are not constrained to be less than unit length.",
"Due to lack of space, we are unable to show plots for relation vectors here, but the same can be found in the Supplementary Section.",
"Relating Geometry to Performance Summary of Findings: Additive: Neither entites nor relations exhibit correlation between geometry and performance.",
"Multiplicative: Keeping negative samples fixed, lower conicity or higher average vector length for entities leads to improved performance.",
"No relationship for relations.",
"In this section, we analyze the relationship between geometry and performance on the Link prediction task, using the same setting as in (Bordes et al., 2013) .",
"Figure 6 (left) presents the effects of conicity of entity vectors on performance, while Figure 6 (right) shows the effects of average entity vector length.",
"4 As we see from Figure 6 (left), for fixed number of negative samples, the multiplicative model with lower conicity of entity vectors achieves better performance.",
"This performance gain is larger for higher numbers of negative samples (N).",
"Additive models don't exhibit any relationship between performance and conicity, as they are all clustered around zero conicity, which is in-line with our observations in previous sections.",
"In Figure 6 (right) , for all multiplicative models except HolE, a higher average entity vector length translates to better performance, while the number of negative samples is kept fixed.",
"Additive models and HolE don't exhibit any such patterns, as they are all clustered just below unit average entity vector length.",
"The above two observations for multiplicative models make intuitive sense, as lower conicity and higher average vector length would both translate to vectors being more dispersed in the space.",
"We see another interesting observation regarding the high sensitivity of HolE to the number of negative samples used during training.",
"Using a large number of negative examples (e.g., N = 50 or 100) leads to very high conicity in case of HolE.",
"Figure 6 (right) shows that average entity vector length of HolE is always one.",
"These two observations point towards HolE's entity vectors lying in a tiny part of the space.",
"This translates to HolE performing poorer than all other models in case of high numbers of negative sampling.",
"We also did a similar study for relation vectors, but did not see any discernible patterns.",
"Conclusion In this paper, we have initiated a systematic study into the important but unexplored problem of analyzing geometry of various Knowledge Graph (KG) embedding methods.",
"To the best of our knowledge, this is the first study of its kind.",
"Through extensive experiments on multiple realworld datasets, we are able to identify several insights into the geometry of KG embeddings.",
"We have also explored the relationship between KG embedding geometry and its task performance.",
"We have shared all our source code to foster further research in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"6.2",
"6.2.1",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Overview of KG Embedding Methods",
"Additive KG Embedding Methods",
"Multiplicative KG Embedding Methods",
"Metrics",
"Experimental Setup",
"Results and Analysis",
"Effect of Number of Negative Samples on Geometry",
"Correlation with Geometry of Word Embeddings",
"Conclusion"
]
} | GEM-SciDuet-train-132#paper-1355#slide-3 | Problem | Study the geometrical behavior of KG embeddings learnt by different methods.
Study the effect of various hyper-parameters used during training on the geometry of KG embeddings.
Study the correlation between the geometry and performance of KG embeddings. | Study the geometrical behavior of KG embeddings learnt by different methods.
Study the effect of various hyper-parameters used during training on the geometry of KG embeddings.
Study the correlation between the geometry and performance of KG embeddings. | [] |
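The conicity, vector-spread, and average-length figures discussed throughout the record above follow the metric definitions given in this paper's content: ATM is the cosine between a vector and the mean of its set, conicity is the mean ATM, and vector spread is the variance of ATM. As a quick illustration, here is a minimal NumPy sketch of those metrics; the function names are our own, and this is not the authors' released analysis code:

```python
import numpy as np

def atm(v, mean_vec, eps=1e-12):
    # Alignment to mean (ATM): cosine similarity between v and the mean vector.
    return float(v @ mean_vec / (np.linalg.norm(v) * np.linalg.norm(mean_vec) + eps))

def geometry_stats(V):
    # V: (n, d) array of entity or relation vectors.
    # Returns (conicity, vector_spread, average_length):
    # conicity = mean ATM over the set, vector spread = variance of ATM.
    mean_vec = V.mean(axis=0)
    atms = np.array([atm(v, mean_vec) for v in V])
    return atms.mean(), atms.var(), np.linalg.norm(V, axis=1).mean()

# Vectors sampled around a common direction lie in a narrow cone:
# conicity close to 1 with low spread, as reported for multiplicative models.
V = np.random.normal(loc=1.0, scale=0.1, size=(100, 50))
conicity, spread, avg_len = geometry_stats(V)
```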
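The link prediction evaluation referenced above follows the protocol of (Bordes et al., 2013): each test triple is ranked against all corruptions of its tail (and, symmetrically, its head), with other known-true triples filtered out. The following is a schematic sketch of tail-side filtered Hits@10, assuming a generic score(h, r, t) callable; it illustrates the protocol and is not the evaluation code used in the paper:

```python
def hits_at_10(test_triples, entities, score, known_true):
    # known_true: set of all true triples across train/valid/test, used to
    # filter out other correct answers before ranking (the "filtered" setting).
    hits = 0
    for h, r, t in test_triples:
        candidates = [e for e in entities
                      if e == t or (h, r, e) not in known_true]
        ranked = sorted(candidates, key=lambda e: score(h, r, e), reverse=True)
        if ranked.index(t) < 10:   # 0-based rank below 10 counts as a hit
            hits += 1
    return hits / len(test_triples)
```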
GEM-SciDuet-train-132#paper-1355#slide-4 | 1355 | Towards Understanding the Geometry of Knowledge Graph Embeddings | Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored -we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embeddings methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188
],
"paper_content_text": [
"Introduction Knowledge Graphs (KGs) are multi-relational graphs where nodes represent entities and typededges represent relationships among entities.",
"Recent research in this area has resulted in the development of several large KGs, such as NELL (Mitchell et al., 2015) , YAGO (Suchanek et al., 2007) , and Freebase (Bollacker et al., 2008) , among others.",
"These KGs contain thousands of predicates (e.g., person, city, mayorOf(person, city), etc.",
"), and millions of triples involving such predicates, e.g., (Bill de Blasio, mayorOf, New York City) .",
"The problem of learning embeddings for Knowledge Graphs has received significant attention in recent years, with several methods being proposed (Bordes et al., 2013; Lin et al., 2015; Nguyen et al., 2016; Nickel et al., 2016; Trouillon et al., 2016) .",
"These methods represent entities and relations in a KG as vectors in high dimensional space.",
"These vectors can then be used for various tasks, such as, link prediction, entity classification etc.",
"Starting with TransE (Bordes et al., 2013) , there have been many KG embedding methods such as TransH (Wang et al., 2014) , TransR (Lin et al., 2015) and STransE (Nguyen et al., 2016) which represent relations as translation vectors from head entities to tail entities.",
"These are additive models, as the vectors interact via addition and subtraction.",
"Other KG embedding models, such as, DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) , and ComplEx (Trouillon et al., 2016) are multiplicative where entityrelation-entity triple likelihood is quantified by a multiplicative score function.",
"All these methods employ a score function for distinguishing correct triples from incorrect ones.",
"In spite of the existence of many KG embedding methods, our understanding of the geometry and structure of such embeddings is very shallow.",
"A recent work (Mimno and Thompson, 2017) analyzed the geometry of word embeddings.",
"However, the problem of analyzing geometry of KG embeddings is still unexplored -we fill this important gap.",
"In this paper, we analyze the geometry of such vectors in terms of their lengths and conicity, which, as defined in Section 4, describes their positions and orientations in the vector space.",
"We later study the effects of model type and training hyperparameters on the geometry of KG embeddings and correlate geometry with performance.",
"We make the following contributions: • We initiate a study to analyze the geometry of various Knowledge Graph (KG) embeddings.",
"To the best of our knowledge, this is the first study of its kind.",
"We also formalize various metrics which can be used to study geometry of a set of vectors.",
"• Through extensive analysis, we discover several interesting insights about the geometry of KG embeddings.",
"For example, we find systematic differences between the geometries of embeddings learned by additive and multiplicative KG embedding methods.",
"• We also study the relationship between geometric attributes and predictive performance of the embeddings, resulting in several new insights.",
"For example, in case of multiplicative models, we observe that for entity vectors generated with a fixed number of negative samples, lower conicity (as defined in Section 4) or higher average vector length lead to higher performance.",
"Source code of all the analysis tools developed as part of this paper is available at https://github.com/malllabiisc/ kg-geometry.",
"We are hoping that these resources will enable one to quickly analyze the geometry of any KG embedding, and potentially other embeddings as well.",
"Related Work In spite of the extensive and growing literature on both KG and non-KG embedding methods, very little attention has been paid towards understanding the geometry of the learned embeddings.",
"A recent work (Mimno and Thompson, 2017) is an exception to this which addresses this problem in the context of word vectors.",
"This work revealed a surprising correlation between word vector geometry and the number of negative samples used during training.",
"Instead of word vectors, in this paper we focus on understanding the geometry of KG embeddings.",
"In spite of this difference, the insights we discover in this paper generalizes some of the observations in the work of (Mimno and Thompson, 2017) .",
"Please see Section 6.2 for more details.",
"Since KGs contain only positive triples, negative sampling has been used for training KG embeddings.",
"Effect of the number of negative samples in KG embedding performance was studied by (Toutanova et al., 2015) .",
"In this paper, we study the effect of the number of negative samples on KG embedding geometry as well as performance.",
"In addition to the additive and multiplicative KG embedding methods already mentioned in Section 1, there is another set of methods where the entity and relation vectors interact via a neural network.",
"Examples of methods in this category include NTN (Socher et al., 2013) , CONV (Toutanova et al., 2015) , ConvE (Dettmers et al., 2017) , R-GCN (Schlichtkrull et al., 2017) , ER-MLP (Dong et al., 2014) and ER-MLP-2n (Ravishankar et al., 2017) .",
"Due to space limitations, in this paper we restrict our scope to the analysis of the geometry of additive and multiplicative KG embedding models only, and leave the analysis of the geometry of neural network-based methods as part of future work.",
"Overview of KG Embedding Methods For our analysis, we consider six representative KG embedding methods: TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , STransE (Nguyen et al., 2016) , DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) and ComplEx (Trouillon et al., 2016) .",
"We refer to TransE, TransR and STransE as additive methods because they learn embeddings by modeling relations as translation vectors from one entity to another, which results in vectors interacting via the addition operation during training.",
"On the other hand, we refer to Dist-Mult, HolE and ComplEx as multiplicative methods as they quantify the likelihood of a triple belonging to the KG through a multiplicative score function.",
"The score functions optimized by these methods are summarized in Table 1 .",
"Notation: Let G = (E, R, T ) be a Knowledge Graph (KG) where E is the set of entities, R is the set of relations and T ⊂ E × R × E is the set of triples stored in the graph.",
"Most of the KG embedding methods learn vectors e ∈ R de for e ∈ E, and r ∈ R dr for r ∈ R. Some methods also learn projection matrices M r ∈ R dr×de for relations.",
"The correctness of a triple is evaluated using a model specific score function σ : E × R × E → R. For learning the embeddings, a loss function L(T , T ; θ), defined over a set of positive triples T , set of (sampled) negative triples T , and the parameters θ is optimized.",
"We use small italics characters (e.g., h, r) to represent entities and relations, and correspond-Type Model Score Function σ(h, r, t) Additive TransE (Bordes et al., 2013) − h + r − t 1 TransR (Lin et al., 2015) − Mrh + r − Mrt 1 STransE (Nguyen et al., 2016) − M 1 r h + r − M 2 r t 1 Multiplicative DistMult (Yang et al., 2014) r (h t) HolE (Nickel et al., 2016) r (h t) ComplEx (Trouillon et al., 2016) Re(r (h t )) Table 1 : Summary of various Knowledge Graph (KG) embedding methods used in the paper.",
"Please see Section 3 for more details.",
"ing bold characters to represent their vector embeddings (e.g., h, r).",
"We use bold capitalization (e.g., V) to represent a set of vectors.",
"Matrices are represented by capital italics characters (e.g., M ).",
"Additive KG Embedding Methods This is the set of methods where entity and relation vectors interact via additive operations.",
"The score function for these models can be expressed as below σ(h, r, t) = − M 1 r h + r − M 2 r t 1 (1) where h, t ∈ R de and r ∈ R dr are vectors for head entity, tail entity and relation respectively.",
"M 1 r , M 2 r ∈ R dr×de are projection matrices from entity space R de to relation space R dr .",
"TransE (Bordes et al., 2013) is the simplest additive model where the entity and relation vectors lie in same d−dimensional space, i.e., d e = d r = d. The projection matrices M 1 r = M 2 r = I d are identity matrices.",
"The relation vectors are modeled as translation vectors from head entity vectors to tail entity vectors.",
"Pairwise ranking loss is then used to learn these vectors.",
"Since the model is simple, it has limited capability in capturing many-to-one, one-to-many and many-to-many relations.",
"TransR (Lin et al., 2015) is another translationbased model which uses separate spaces for entity and relation vectors allowing it to address the shortcomings of TransE.",
"Entity vectors are projected into a relation specific space using the corresponding projection matrix M 1 r = M 2 r = M r .",
"The training is similar to TransE.",
"STransE (Nguyen et al., 2016) is a generalization of TransR and uses different projection matrices for head and tail entity vectors.",
"The training is similar to TransE.",
"STransE achieves better performance than the previous methods but at the cost of more number of parameters.",
"Equation 1 is the score function used in STransE.",
"TransE and TransR are special cases of STransE with M 1 r = M 2 r = I d and M 1 r = M 2 r = M r , respectively.",
"Multiplicative KG Embedding Methods This is the set of methods where the vectors interact via multiplicative operations (usually dot product).",
"The score function for these models can be expressed as σ(h, r, t) = r f (h, t) (2) where h, t, r ∈ F d are vectors for head entity, tail entity and relation respectively.",
"f (h, t) ∈ F d measures compatibility of head and tail entities and is specific to the model.",
"F is either real space R or complex space C. Detailed descriptions of the models we consider are as follows.",
"DistMult (Yang et al., 2014 ) models entities and relations as vectors in R d .",
"It uses an entry-wise product ( ) to measure compatibility between head and tail entities, while using logistic loss for training the model.",
"σ DistM ult (h, r, t) = r (h t) (3) Since the entry-wise product in (3) is symmetric, DistMult is not suitable for asymmetric and antisymmetric relations.",
"HolE (Nickel et al., 2016) also models entities and relations as vectors in R d .",
"It uses circular correlation operator ( ) as compatibility function defined as [h t] k = d−1 i=0 h i t (k+i) mod d The score function is given as σ HolE (h, r, t) = r (h t) (4) The circular correlation operator being asymmetric, can capture asymmetric and anti-symmetric relations, but at the cost of higher time complexity The points are sampled from 3d Spherical Gaussian with mean (1,1,1) and standard deviation 0.1 (left) and 1.3 (right).",
"Please refer to Section 4 for more details.",
"(O (d log d) ).",
"For training, we use pairwise ranking loss.",
"ComplEx (Trouillon et al., 2016) represents entities and relations as vectors in C d .",
"The compatibility of entity pairs is measured using entry-wise product between head and complex conjugate of tail entity vectors.",
"σ ComplEx (h, r, t) = Re(r (h t )) (5) In contrast to (3), using complex vectors in (5) allows ComplEx to handle symmetric, asymmetric and anti-symmetric relations using the same score function.",
"Similar to DistMult, logistic loss is used for training the model.",
"Metrics For our geometrical analysis, we first define a term 'alignment to mean' (ATM) of a vector v belonging to a set of vectors V, as the cosine similarity 1 between v and the mean of all vectors in V. ATM(v, V) = cosine v, 1 |V| x∈V x We also define 'conicity' of a set V as the mean ATM of all vectors in V. By this definition, a high value of Conicity(V) would imply that the vectors in V lie in a narrow cone centered at origin.",
"In other words, the vectors in the set V are highly aligned with each other.",
"In addition to that, we define the variance of ATM across all vectors in V, as the 'vector spread'(VS) of set V, For each method, a plot averaged across entity frequency bins is shown.",
"From these plots, we conclude that entity embeddings from additive models tend to have low (positive as well as negative) ATM and thereby low Conicity and high vector spread.",
"Interestingly, this is reversed in case of multiplicative methods.",
"Please see Section 6.1 for more details.",
"Conicity(V) = 1 |V| v∈V ATM(v, V) 1 cosine(u, v) = u v u v Dataset VS(V) = 1 |V| v∈V ATM(v, V)−Conicity(V) Experimental Setup Datasets: We run our experiments on subsets of two widely used datasets, viz., Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995) , called FB15k and WN18 (Bordes et al., 2013) , respectively.",
"We detail the characteristics of these datasets in Table 2 .",
"Please note that while the results presented in Section 6 are on the FB15K dataset, we reach the same conclusions on WN18.",
"The plots for our experiments on WN18 can be found in the Supplementary Section.",
"Hyperparameters: We experiment with multiple values of hyperparameters to understand their effect on the geometry of KG embeddings.",
"Specifically, we vary the dimension of the generated vectors between {50, 100, 200} and the number of negative samples used during training between {1, 50, 100}.",
"For more details on algorithm specific hyperparameters, we refer the reader to the Supplementary Section.",
"2 2 For training, we used codes from https://github.",
"Frequency Bins: We follow (Mimno and Thompson, 2017) for entity and relation samples used in the analysis.",
"Multiple bins of entities and relations are created based on their frequencies and 100 randomly sampled vectors are taken from each bin.",
"These set of sampled vectors are then used for our analysis.",
"For more information about sampling vectors, please refer to (Mimno and Thompson, 2017) .",
"Results and Analysis In this section, we evaluate the following questions.",
"• Does model type (e.g., additive vs multiplicative) have any effect on the geometry of embeddings?",
"(Section 6.",
"For each method, a plot averaged across entity frequency bins is shown.",
"Trends in these plots are similar to those in Figure 2 .",
"Main findings from these plots are summarized in Section 6.1.",
"• Does negative sampling have any effect on the embedding geometry?",
"(Section 6.2) • Does the dimension of embedding have any effect on its geometry?",
"(Section 6.3) • How is task performance related to embedding geometry?",
"(Section 6.4) In each subsection, we summarize the main findings at the beginning, followed by evidence supporting those findings.",
"Effect of Model Type on Geometry Summary of Findings: Additive: Low conicity and high vector spread.",
"Multiplicative: High conicity and low vector spread.",
"In this section, we explore whether the type of the score function optimized during the training has any effect on the geometry of the resulting embedding.",
"For this experiment, we set the number of negative samples to 1 and the vector dimension to 100 (we got similar results for 50-dimensional vectors).",
"Figure 2 and Figure 3 show the distribution of ATMs of these sampled entity and relation vectors, respectively.",
"3 Entity Embeddings: As seen in Figure 2 , there is a stark difference between the geometries of entity vectors produced by additive and multiplicative models.",
"The ATMs of all entity vectors produced by multiplicative models are positive with very low vector spread.",
"Their high conicity suggests that they are not uniformly dispersed in the vector space, but lie in a narrow cone along the mean vector.",
"This is in contrast to the entity vectors obtained from additive models which are both positive and negative with higher vector spread.",
"From the lower values of conicity, we conclude that entity vectors from additive models are evenly dispersed in the vector space.",
"This observation is also reinforced by looking at the high vector spread of additive models in comparison to that of multiplicative models.",
"We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins.",
"However, no such pattern was observed for multiplicative models and In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.2 conicity was consistently similar across frequency bins.",
"For clarity, we have not shown different plots for individual frequency bins.",
"Relation Embeddings: As in entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3 .",
"The conicity of relation vectors generated using additive models is almost zero across frequency bands.",
"This coupled with the high vector spread observed, suggests that these vectors are scattered throughout the vector space.",
"Relation vectors from multiplicative models exhibit high conicity and low vector spread, suggesting that they lie in a narrow cone centered at origin, like their entity counterparts.",
"Effect of Number of Negative Samples on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in #NegativeSamples for both entities and relations.",
"Multiplicative: Conicity increases while average vector length decrease with increasing #NegativeSamples for entities.",
"Conicity decreases, while average vector length remains constant (except HolE) for relations.",
"For experiments in this section, we keep the vector dimension constant at 100.",
"Entity Embeddings: As seen in Figure 4 (left) , the conicity of entity vectors increases as the number of negative samples is increased for multiplicative models.",
"In contrast, conicity of the entity vectors generated by additive models is unaffected by change in number of negative samples and they continue to be dispersed throughout the vector space.",
"From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant of any changes in number of negative samples.",
"On the other hand, increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE.",
"The average entity vector length for HolE is nearly 1 for any number of negative samples, which is understandable considering it constrains the entity vectors to lie inside a unit ball (Nickel et al., 2016) .",
"This constraint is also enforced by the additive models: TransE, TransR, and STransE.",
"Relation Embeddings: Similar to entity embeddings, in case of relation vectors trained using additive models, the average length and conicity do not change while varying the number of negative samples.",
"However, the conicity of relation vectors from multiplicative models decreases with increase in negative sampling.",
"The average relation vector length is invariant for all multiplicative methods, except for HolE.",
"We see a surprisingly big jump in average relation vector length for HolE going from 1 to 50 negative samples, but it does not change after that.",
"Due to space constraints in the paper, we refer the reader to the Supplementary Section for plots discussing the effect of number of negative samples on geometry of relation vectors.",
"We note that the multiplicative score between two vectors may be increased by either increasing the alignment between the two vectors (i.e., increasing Conicity and reducing vector spread between them), or by increasing their lengths.",
"It is interesting to note that we see exactly these effects in the geometry of multiplicative methods In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.3. analyzed above.",
"Correlation with Geometry of Word Embeddings Our conclusions from the geometrical analysis of entity vectors produced by multiplicative models are similar to the results in (Mimno and Thompson, 2017) , where increase in negative sampling leads to increased conicity of word vectors trained using the skip-gram with negative sampling (SGNS) method.",
"On the other hand, additive models remain unaffected by these changes.",
"SGNS tries to maximize a score function of the form w T · c for positive word context pairs, where w is the word vector and c is the context vector (Mikolov et al., 2013) .",
"This is very similar to the score function of multiplicative models as seen in Table 1 .",
"Hence, SGNS can be considered as a multiplicative model in the word domain.",
"Hence, we argue that our result on the increase in negative samples increasing the conicity of vectors trained using a multiplicative score function can be considered as a generalization of the one in (Mimno and Thompson, 2017) .",
"Effect of Vector Dimension on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in dimension for both entities and relations.",
"Multiplicative: Conicity decreases for both entities and relations with increasing dimension.",
"Average vector length increases for both entities and relations, except for HolE entities.",
"Entity Embeddings: To study the effect of vec-tor dimension on conicity and length, we set the number of negative samples to 1, while varying the vector dimension.",
"From Figure 5 (left), we observe that the conicity for entity vectors generated by any additive model is almost invariant of increase in dimension, though STransE exhibits a slight decrease.",
"In contrast, entity vector from multiplicative models show a clear decreasing pattern with increasing dimension.",
"As seen in Figure 5 (right) , the average lengths of entity vectors from multiplicative models increase sharply with increasing vector dimension, except for HolE.",
"In case of HolE, the average vector length remains constant at one.",
"Deviation involving HolE is expected as it enforces entity vectors to fall within a unit ball.",
"Similar constraints are enforced on entity vectors for additive models as well.",
"Thus, the average entity vector lengths are not affected by increasing vector dimension for all additive models.",
"Relation Embeddings: We reach similar conclusion when analyzing against increasing dimension the change in geometry of relation vectors produced using these KG embedding methods.",
"In this setting, the average length of relation vectors learned by HolE also increases as dimension is increased.",
"This is consistent with the other methods in the multiplicative family.",
"This is because, unlike entity vectors, the lengths of relation vectors of HolE are not constrained to be less than unit length.",
"Due to lack of space, we are unable to show plots for relation vectors here, but the same can be found in the Supplementary Section.",
"Relating Geometry to Performance Summary of Findings: Additive: Neither entites nor relations exhibit correlation between geometry and performance.",
"Multiplicative: Keeping negative samples fixed, lower conicity or higher average vector length for entities leads to improved performance.",
"No relationship for relations.",
"In this section, we analyze the relationship between geometry and performance on the Link prediction task, using the same setting as in (Bordes et al., 2013) .",
"Figure 6 (left) presents the effects of conicity of entity vectors on performance, while Figure 6 (right) shows the effects of average entity vector length.",
"4 As we see from Figure 6 (left), for fixed number of negative samples, the multiplicative model with lower conicity of entity vectors achieves better performance.",
"This performance gain is larger for higher numbers of negative samples (N).",
"Additive models don't exhibit any relationship between performance and conicity, as they are all clustered around zero conicity, which is in-line with our observations in previous sections.",
"In Figure 6 (right) , for all multiplicative models except HolE, a higher average entity vector length translates to better performance, while the number of negative samples is kept fixed.",
"Additive models and HolE don't exhibit any such patterns, as they are all clustered just below unit average entity vector length.",
"The above two observations for multiplicative models make intuitive sense, as lower conicity and higher average vector length would both translate to vectors being more dispersed in the space.",
"We see another interesting observation regarding the high sensitivity of HolE to the number of negative samples used during training.",
"Using a large number of negative examples (e.g., N = 50 or 100) leads to very high conicity in case of HolE.",
"Figure 6 (right) shows that average entity vector length of HolE is always one.",
"These two observations point towards HolE's entity vectors lying in a tiny part of the space.",
"This translates to HolE performing poorer than all other models in case of high numbers of negative sampling.",
"We also did a similar study for relation vectors, but did not see any discernible patterns.",
"Conclusion In this paper, we have initiated a systematic study into the important but unexplored problem of analyzing geometry of various Knowledge Graph (KG) embedding methods.",
"To the best of our knowledge, this is the first study of its kind.",
"Through extensive experiments on multiple realworld datasets, we are able to identify several insights into the geometry of KG embeddings.",
"We have also explored the relationship between KG embedding geometry and its task performance.",
"We have shared all our source code to foster further research in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"6.2",
"6.2.1",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Overview of KG Embedding Methods",
"Additive KG Embedding Methods",
"Multiplicative KG Embedding Methods",
"Metrics",
"Experimental Setup",
"Results and Analysis",
"Effect of Number of Negative Samples on Geometry",
"Correlation with Geometry of Word Embeddings",
"Conclusion"
]
} | GEM-SciDuet-train-132#paper-1355#slide-4 | KG Embedding Methods | Learns d-dimensional vectors for entities and relations in a KG.
A score function distinguishes correct triples from incorrect triples
(Messi, plays-for-team, Barcelona) > (Messi, plays-for-team, Liverpool)
A loss function is used for training the embeddings (usually logistic loss or margin-based ranking loss).
Entry-wise product; circular correlation | Learns d-dimensional vectors for entities and relations in a KG.
A score function distinguishes correct triples from incorrect triples
(Messi, plays-for-team, Barcelona) > (Messi, plays-for-team, Liverpool)
A loss function is used for training the embeddings (usually logistic loss or margin-based ranking loss).
Entry-wise product; circular correlation | []
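The entry-wise product and circular correlation named on this slide are the compatibility operators of DistMult and HolE; the corresponding score functions are listed in the paper content above. Below is a minimal NumPy sketch of one additive and two multiplicative score functions, restating the published formulas (not the training code; the vector arguments are illustrative):

```python
import numpy as np

def score_transe(h, r, t):
    # Additive: negative L1 norm of the translation residual h + r - t.
    return -float(np.abs(h + r - t).sum())

def score_distmult(h, r, t):
    # Multiplicative: r^T (h * t), with * the entry-wise product (symmetric in h, t).
    return float(r @ (h * t))

def circular_correlation(a, b):
    # [a (star) b]_k = sum_i a_i * b_{(k + i) mod d}, computed via the FFT identity.
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

def score_hole(h, r, t):
    # Multiplicative: r^T (h (star) t); circular correlation is asymmetric,
    # so HolE can model asymmetric relations, unlike DistMult.
    return float(r @ circular_correlation(h, t))
```

After training, the slide's example would correspond to score_distmult(messi, plays_for_team, barcelona) exceeding score_distmult(messi, plays_for_team, liverpool) for the learned vectors (variable names hypothetical).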
GEM-SciDuet-train-132#paper-1355#slide-6 | 1355 | Towards Understanding the Geometry of Knowledge Graph Embeddings | Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored -we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embeddings methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188
],
"paper_content_text": [
"Introduction Knowledge Graphs (KGs) are multi-relational graphs where nodes represent entities and typededges represent relationships among entities.",
"Recent research in this area has resulted in the development of several large KGs, such as NELL (Mitchell et al., 2015) , YAGO (Suchanek et al., 2007) , and Freebase (Bollacker et al., 2008) , among others.",
"These KGs contain thousands of predicates (e.g., person, city, mayorOf(person, city), etc.",
"), and millions of triples involving such predicates, e.g., (Bill de Blasio, mayorOf, New York City) .",
"The problem of learning embeddings for Knowledge Graphs has received significant attention in recent years, with several methods being proposed (Bordes et al., 2013; Lin et al., 2015; Nguyen et al., 2016; Nickel et al., 2016; Trouillon et al., 2016) .",
"These methods represent entities and relations in a KG as vectors in high dimensional space.",
"These vectors can then be used for various tasks, such as, link prediction, entity classification etc.",
"Starting with TransE (Bordes et al., 2013) , there have been many KG embedding methods such as TransH (Wang et al., 2014) , TransR (Lin et al., 2015) and STransE (Nguyen et al., 2016) which represent relations as translation vectors from head entities to tail entities.",
"These are additive models, as the vectors interact via addition and subtraction.",
"Other KG embedding models, such as, DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) , and ComplEx (Trouillon et al., 2016) are multiplicative where entityrelation-entity triple likelihood is quantified by a multiplicative score function.",
"All these methods employ a score function for distinguishing correct triples from incorrect ones.",
"In spite of the existence of many KG embedding methods, our understanding of the geometry and structure of such embeddings is very shallow.",
"A recent work (Mimno and Thompson, 2017) analyzed the geometry of word embeddings.",
"However, the problem of analyzing geometry of KG embeddings is still unexplored -we fill this important gap.",
"In this paper, we analyze the geometry of such vectors in terms of their lengths and conicity, which, as defined in Section 4, describes their positions and orientations in the vector space.",
"We later study the effects of model type and training hyperparameters on the geometry of KG embeddings and correlate geometry with performance.",
"We make the following contributions: • We initiate a study to analyze the geometry of various Knowledge Graph (KG) embeddings.",
"To the best of our knowledge, this is the first study of its kind.",
"We also formalize various metrics which can be used to study geometry of a set of vectors.",
"• Through extensive analysis, we discover several interesting insights about the geometry of KG embeddings.",
"For example, we find systematic differences between the geometries of embeddings learned by additive and multiplicative KG embedding methods.",
"• We also study the relationship between geometric attributes and predictive performance of the embeddings, resulting in several new insights.",
"For example, in case of multiplicative models, we observe that for entity vectors generated with a fixed number of negative samples, lower conicity (as defined in Section 4) or higher average vector length lead to higher performance.",
"Source code of all the analysis tools developed as part of this paper is available at https://github.com/malllabiisc/ kg-geometry.",
"We are hoping that these resources will enable one to quickly analyze the geometry of any KG embedding, and potentially other embeddings as well.",
"Related Work In spite of the extensive and growing literature on both KG and non-KG embedding methods, very little attention has been paid towards understanding the geometry of the learned embeddings.",
"A recent work (Mimno and Thompson, 2017) is an exception to this which addresses this problem in the context of word vectors.",
"This work revealed a surprising correlation between word vector geometry and the number of negative samples used during training.",
"Instead of word vectors, in this paper we focus on understanding the geometry of KG embeddings.",
"In spite of this difference, the insights we discover in this paper generalizes some of the observations in the work of (Mimno and Thompson, 2017) .",
"Please see Section 6.2 for more details.",
"Since KGs contain only positive triples, negative sampling has been used for training KG embeddings.",
"Effect of the number of negative samples in KG embedding performance was studied by (Toutanova et al., 2015) .",
"In this paper, we study the effect of the number of negative samples on KG embedding geometry as well as performance.",
"In addition to the additive and multiplicative KG embedding methods already mentioned in Section 1, there is another set of methods where the entity and relation vectors interact via a neural network.",
"Examples of methods in this category include NTN (Socher et al., 2013) , CONV (Toutanova et al., 2015) , ConvE (Dettmers et al., 2017) , R-GCN (Schlichtkrull et al., 2017) , ER-MLP (Dong et al., 2014) and ER-MLP-2n (Ravishankar et al., 2017) .",
"Due to space limitations, in this paper we restrict our scope to the analysis of the geometry of additive and multiplicative KG embedding models only, and leave the analysis of the geometry of neural network-based methods as part of future work.",
"Overview of KG Embedding Methods For our analysis, we consider six representative KG embedding methods: TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , STransE (Nguyen et al., 2016) , DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) and ComplEx (Trouillon et al., 2016) .",
"We refer to TransE, TransR and STransE as additive methods because they learn embeddings by modeling relations as translation vectors from one entity to another, which results in vectors interacting via the addition operation during training.",
"On the other hand, we refer to Dist-Mult, HolE and ComplEx as multiplicative methods as they quantify the likelihood of a triple belonging to the KG through a multiplicative score function.",
"The score functions optimized by these methods are summarized in Table 1 .",
"Notation: Let G = (E, R, T ) be a Knowledge Graph (KG) where E is the set of entities, R is the set of relations and T ⊂ E × R × E is the set of triples stored in the graph.",
"Most of the KG embedding methods learn vectors e ∈ R de for e ∈ E, and r ∈ R dr for r ∈ R. Some methods also learn projection matrices M r ∈ R dr×de for relations.",
"The correctness of a triple is evaluated using a model specific score function σ : E × R × E → R. For learning the embeddings, a loss function L(T , T ; θ), defined over a set of positive triples T , set of (sampled) negative triples T , and the parameters θ is optimized.",
"We use small italics characters (e.g., h, r) to represent entities and relations, and correspond-Type Model Score Function σ(h, r, t) Additive TransE (Bordes et al., 2013) − h + r − t 1 TransR (Lin et al., 2015) − Mrh + r − Mrt 1 STransE (Nguyen et al., 2016) − M 1 r h + r − M 2 r t 1 Multiplicative DistMult (Yang et al., 2014) r (h t) HolE (Nickel et al., 2016) r (h t) ComplEx (Trouillon et al., 2016) Re(r (h t )) Table 1 : Summary of various Knowledge Graph (KG) embedding methods used in the paper.",
"Please see Section 3 for more details.",
"ing bold characters to represent their vector embeddings (e.g., h, r).",
"We use bold capitalization (e.g., V) to represent a set of vectors.",
"Matrices are represented by capital italics characters (e.g., M ).",
"Additive KG Embedding Methods This is the set of methods where entity and relation vectors interact via additive operations.",
"The score function for these models can be expressed as below σ(h, r, t) = − M 1 r h + r − M 2 r t 1 (1) where h, t ∈ R de and r ∈ R dr are vectors for head entity, tail entity and relation respectively.",
"M 1 r , M 2 r ∈ R dr×de are projection matrices from entity space R de to relation space R dr .",
"TransE (Bordes et al., 2013) is the simplest additive model where the entity and relation vectors lie in same d−dimensional space, i.e., d e = d r = d. The projection matrices M 1 r = M 2 r = I d are identity matrices.",
"The relation vectors are modeled as translation vectors from head entity vectors to tail entity vectors.",
"Pairwise ranking loss is then used to learn these vectors.",
"Since the model is simple, it has limited capability in capturing many-to-one, one-to-many and many-to-many relations.",
"TransR (Lin et al., 2015) is another translationbased model which uses separate spaces for entity and relation vectors allowing it to address the shortcomings of TransE.",
"Entity vectors are projected into a relation specific space using the corresponding projection matrix M 1 r = M 2 r = M r .",
"The training is similar to TransE.",
"STransE (Nguyen et al., 2016) is a generalization of TransR and uses different projection matrices for head and tail entity vectors.",
"The training is similar to TransE.",
"STransE achieves better performance than the previous methods but at the cost of more number of parameters.",
"Equation 1 is the score function used in STransE.",
"TransE and TransR are special cases of STransE with M 1 r = M 2 r = I d and M 1 r = M 2 r = M r , respectively.",
"Multiplicative KG Embedding Methods This is the set of methods where the vectors interact via multiplicative operations (usually dot product).",
"The score function for these models can be expressed as σ(h, r, t) = r f (h, t) (2) where h, t, r ∈ F d are vectors for head entity, tail entity and relation respectively.",
"f (h, t) ∈ F d measures compatibility of head and tail entities and is specific to the model.",
"F is either real space R or complex space C. Detailed descriptions of the models we consider are as follows.",
"DistMult (Yang et al., 2014 ) models entities and relations as vectors in R d .",
"It uses an entry-wise product ( ) to measure compatibility between head and tail entities, while using logistic loss for training the model.",
"σ DistM ult (h, r, t) = r (h t) (3) Since the entry-wise product in (3) is symmetric, DistMult is not suitable for asymmetric and antisymmetric relations.",
"HolE (Nickel et al., 2016) also models entities and relations as vectors in R d .",
"It uses circular correlation operator ( ) as compatibility function defined as [h t] k = d−1 i=0 h i t (k+i) mod d The score function is given as σ HolE (h, r, t) = r (h t) (4) The circular correlation operator being asymmetric, can capture asymmetric and anti-symmetric relations, but at the cost of higher time complexity The points are sampled from 3d Spherical Gaussian with mean (1,1,1) and standard deviation 0.1 (left) and 1.3 (right).",
"Please refer to Section 4 for more details.",
"(O (d log d) ).",
"For training, we use pairwise ranking loss.",
"ComplEx (Trouillon et al., 2016) represents entities and relations as vectors in C d .",
"The compatibility of entity pairs is measured using entry-wise product between head and complex conjugate of tail entity vectors.",
"σ ComplEx (h, r, t) = Re(r (h t )) (5) In contrast to (3), using complex vectors in (5) allows ComplEx to handle symmetric, asymmetric and anti-symmetric relations using the same score function.",
"Similar to DistMult, logistic loss is used for training the model.",
"Metrics For our geometrical analysis, we first define a term 'alignment to mean' (ATM) of a vector v belonging to a set of vectors V, as the cosine similarity 1 between v and the mean of all vectors in V. ATM(v, V) = cosine v, 1 |V| x∈V x We also define 'conicity' of a set V as the mean ATM of all vectors in V. By this definition, a high value of Conicity(V) would imply that the vectors in V lie in a narrow cone centered at origin.",
"In other words, the vectors in the set V are highly aligned with each other.",
"In addition to that, we define the variance of ATM across all vectors in V, as the 'vector spread'(VS) of set V, For each method, a plot averaged across entity frequency bins is shown.",
"From these plots, we conclude that entity embeddings from additive models tend to have low (positive as well as negative) ATM and thereby low Conicity and high vector spread.",
"Interestingly, this is reversed in case of multiplicative methods.",
"Please see Section 6.1 for more details.",
"Conicity(V) = 1 |V| v∈V ATM(v, V) 1 cosine(u, v) = u v u v Dataset VS(V) = 1 |V| v∈V ATM(v, V)−Conicity(V) Experimental Setup Datasets: We run our experiments on subsets of two widely used datasets, viz., Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995) , called FB15k and WN18 (Bordes et al., 2013) , respectively.",
"We detail the characteristics of these datasets in Table 2 .",
"Please note that while the results presented in Section 6 are on the FB15K dataset, we reach the same conclusions on WN18.",
"The plots for our experiments on WN18 can be found in the Supplementary Section.",
"Hyperparameters: We experiment with multiple values of hyperparameters to understand their effect on the geometry of KG embeddings.",
"Specifically, we vary the dimension of the generated vectors between {50, 100, 200} and the number of negative samples used during training between {1, 50, 100}.",
"For more details on algorithm specific hyperparameters, we refer the reader to the Supplementary Section.",
"2 2 For training, we used codes from https://github.",
"Frequency Bins: We follow (Mimno and Thompson, 2017) for entity and relation samples used in the analysis.",
"Multiple bins of entities and relations are created based on their frequencies and 100 randomly sampled vectors are taken from each bin.",
"These set of sampled vectors are then used for our analysis.",
"For more information about sampling vectors, please refer to (Mimno and Thompson, 2017) .",
"Results and Analysis In this section, we evaluate the following questions.",
"• Does model type (e.g., additive vs multiplicative) have any effect on the geometry of embeddings?",
"(Section 6.",
"For each method, a plot averaged across entity frequency bins is shown.",
"Trends in these plots are similar to those in Figure 2 .",
"Main findings from these plots are summarized in Section 6.1.",
"• Does negative sampling have any effect on the embedding geometry?",
"(Section 6.2) • Does the dimension of embedding have any effect on its geometry?",
"(Section 6.3) • How is task performance related to embedding geometry?",
"(Section 6.4) In each subsection, we summarize the main findings at the beginning, followed by evidence supporting those findings.",
"Effect of Model Type on Geometry Summary of Findings: Additive: Low conicity and high vector spread.",
"Multiplicative: High conicity and low vector spread.",
"In this section, we explore whether the type of the score function optimized during the training has any effect on the geometry of the resulting embedding.",
"For this experiment, we set the number of negative samples to 1 and the vector dimension to 100 (we got similar results for 50-dimensional vectors).",
"Figure 2 and Figure 3 show the distribution of ATMs of these sampled entity and relation vectors, respectively.",
"3 Entity Embeddings: As seen in Figure 2 , there is a stark difference between the geometries of entity vectors produced by additive and multiplicative models.",
"The ATMs of all entity vectors produced by multiplicative models are positive with very low vector spread.",
"Their high conicity suggests that they are not uniformly dispersed in the vector space, but lie in a narrow cone along the mean vector.",
"This is in contrast to the entity vectors obtained from additive models which are both positive and negative with higher vector spread.",
"From the lower values of conicity, we conclude that entity vectors from additive models are evenly dispersed in the vector space.",
"This observation is also reinforced by looking at the high vector spread of additive models in comparison to that of multiplicative models.",
"We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins.",
"However, no such pattern was observed for multiplicative models and In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.2 conicity was consistently similar across frequency bins.",
"For clarity, we have not shown different plots for individual frequency bins.",
"Relation Embeddings: As in entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3 .",
"The conicity of relation vectors generated using additive models is almost zero across frequency bands.",
"This coupled with the high vector spread observed, suggests that these vectors are scattered throughout the vector space.",
"Relation vectors from multiplicative models exhibit high conicity and low vector spread, suggesting that they lie in a narrow cone centered at origin, like their entity counterparts.",
"Effect of Number of Negative Samples on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in #NegativeSamples for both entities and relations.",
"Multiplicative: Conicity increases while average vector length decrease with increasing #NegativeSamples for entities.",
"Conicity decreases, while average vector length remains constant (except HolE) for relations.",
"For experiments in this section, we keep the vector dimension constant at 100.",
"Entity Embeddings: As seen in Figure 4 (left) , the conicity of entity vectors increases as the number of negative samples is increased for multiplicative models.",
"In contrast, conicity of the entity vectors generated by additive models is unaffected by change in number of negative samples and they continue to be dispersed throughout the vector space.",
"From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant of any changes in number of negative samples.",
"On the other hand, increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE.",
"The average entity vector length for HolE is nearly 1 for any number of negative samples, which is understandable considering it constrains the entity vectors to lie inside a unit ball (Nickel et al., 2016) .",
"This constraint is also enforced by the additive models: TransE, TransR, and STransE.",
"Relation Embeddings: Similar to entity embeddings, in case of relation vectors trained using additive models, the average length and conicity do not change while varying the number of negative samples.",
"However, the conicity of relation vectors from multiplicative models decreases with increase in negative sampling.",
"The average relation vector length is invariant for all multiplicative methods, except for HolE.",
"We see a surprisingly big jump in average relation vector length for HolE going from 1 to 50 negative samples, but it does not change after that.",
"Due to space constraints in the paper, we refer the reader to the Supplementary Section for plots discussing the effect of number of negative samples on geometry of relation vectors.",
"We note that the multiplicative score between two vectors may be increased by either increasing the alignment between the two vectors (i.e., increasing Conicity and reducing vector spread between them), or by increasing their lengths.",
"It is interesting to note that we see exactly these effects in the geometry of multiplicative methods In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.3. analyzed above.",
"Correlation with Geometry of Word Embeddings Our conclusions from the geometrical analysis of entity vectors produced by multiplicative models are similar to the results in (Mimno and Thompson, 2017) , where increase in negative sampling leads to increased conicity of word vectors trained using the skip-gram with negative sampling (SGNS) method.",
"On the other hand, additive models remain unaffected by these changes.",
"SGNS tries to maximize a score function of the form w T · c for positive word context pairs, where w is the word vector and c is the context vector (Mikolov et al., 2013) .",
"This is very similar to the score function of multiplicative models as seen in Table 1 .",
"Hence, SGNS can be considered as a multiplicative model in the word domain.",
"Hence, we argue that our result on the increase in negative samples increasing the conicity of vectors trained using a multiplicative score function can be considered as a generalization of the one in (Mimno and Thompson, 2017) .",
"Effect of Vector Dimension on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in dimension for both entities and relations.",
"Multiplicative: Conicity decreases for both entities and relations with increasing dimension.",
"Average vector length increases for both entities and relations, except for HolE entities.",
"Entity Embeddings: To study the effect of vec-tor dimension on conicity and length, we set the number of negative samples to 1, while varying the vector dimension.",
"From Figure 5 (left), we observe that the conicity for entity vectors generated by any additive model is almost invariant of increase in dimension, though STransE exhibits a slight decrease.",
"In contrast, entity vector from multiplicative models show a clear decreasing pattern with increasing dimension.",
"As seen in Figure 5 (right) , the average lengths of entity vectors from multiplicative models increase sharply with increasing vector dimension, except for HolE.",
"In case of HolE, the average vector length remains constant at one.",
"Deviation involving HolE is expected as it enforces entity vectors to fall within a unit ball.",
"Similar constraints are enforced on entity vectors for additive models as well.",
"Thus, the average entity vector lengths are not affected by increasing vector dimension for all additive models.",
"Relation Embeddings: We reach similar conclusion when analyzing against increasing dimension the change in geometry of relation vectors produced using these KG embedding methods.",
"In this setting, the average length of relation vectors learned by HolE also increases as dimension is increased.",
"This is consistent with the other methods in the multiplicative family.",
"This is because, unlike entity vectors, the lengths of relation vectors of HolE are not constrained to be less than unit length.",
"Due to lack of space, we are unable to show plots for relation vectors here, but the same can be found in the Supplementary Section.",
"Relating Geometry to Performance Summary of Findings: Additive: Neither entites nor relations exhibit correlation between geometry and performance.",
"Multiplicative: Keeping negative samples fixed, lower conicity or higher average vector length for entities leads to improved performance.",
"No relationship for relations.",
"In this section, we analyze the relationship between geometry and performance on the Link prediction task, using the same setting as in (Bordes et al., 2013) .",
"Figure 6 (left) presents the effects of conicity of entity vectors on performance, while Figure 6 (right) shows the effects of average entity vector length.",
"4 As we see from Figure 6 (left), for fixed number of negative samples, the multiplicative model with lower conicity of entity vectors achieves better performance.",
"This performance gain is larger for higher numbers of negative samples (N).",
"Additive models don't exhibit any relationship between performance and conicity, as they are all clustered around zero conicity, which is in-line with our observations in previous sections.",
"In Figure 6 (right) , for all multiplicative models except HolE, a higher average entity vector length translates to better performance, while the number of negative samples is kept fixed.",
"Additive models and HolE don't exhibit any such patterns, as they are all clustered just below unit average entity vector length.",
"The above two observations for multiplicative models make intuitive sense, as lower conicity and higher average vector length would both translate to vectors being more dispersed in the space.",
"We see another interesting observation regarding the high sensitivity of HolE to the number of negative samples used during training.",
"Using a large number of negative examples (e.g., N = 50 or 100) leads to very high conicity in case of HolE.",
"Figure 6 (right) shows that average entity vector length of HolE is always one.",
"These two observations point towards HolE's entity vectors lying in a tiny part of the space.",
"This translates to HolE performing poorer than all other models in case of high numbers of negative sampling.",
"We also did a similar study for relation vectors, but did not see any discernible patterns.",
"Conclusion In this paper, we have initiated a systematic study into the important but unexplored problem of analyzing geometry of various Knowledge Graph (KG) embedding methods.",
"To the best of our knowledge, this is the first study of its kind.",
"Through extensive experiments on multiple realworld datasets, we are able to identify several insights into the geometry of KG embeddings.",
"We have also explored the relationship between KG embedding geometry and its task performance.",
"We have shared all our source code to foster further research in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"6.2",
"6.2.1",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Overview of KG Embedding Methods",
"Additive KG Embedding Methods",
"Multiplicative KG Embedding Methods",
"Metrics",
"Experimental Setup",
"Results and Analysis",
"Effect of Number of Negative Samples on Geometry",
"Correlation with Geometry of Word Embeddings",
"Conclusion"
]
} | GEM-SciDuet-train-132#paper-1355#slide-6 | Experiments | We study the effect of the following factors on the geometry of KG embeddings:
Type of method (Additive or Multiplicative)
Number of Negative Samples
Dimension of Vector Space
We also study the correlation between performance and geometry. | We study the effect of the following factors on the geometry of KG embeddings:
Type of method (Additive or Multiplicative)
Number of Negative Samples
Dimension of Vector Space
We also study the correlation between performance and geometry. | []
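The three factors listed on this slide, crossed with the six methods, define the full experimental grid. A sketch of the sweep, again with the hypothetical `train_kg_model` stand-in and the metric helpers defined earlier:

```python
from itertools import product

methods = ["TransE", "TransR", "STransE", "DistMult", "HolE", "ComplEx"]
for method, n_neg, dim in product(methods, [1, 50, 100], [50, 100, 200]):
    entity_vecs, relation_vecs = train_kg_model(method, dim=dim, neg_samples=n_neg)
    print(method, n_neg, dim,
          conicity(entity_vecs), avg_length(entity_vecs),
          conicity(relation_vecs), avg_length(relation_vecs))
```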
GEM-SciDuet-train-132#paper-1355#slide-9 | 1355 | Towards Understanding the Geometry of Knowledge Graph Embeddings | GEM-SciDuet-train-132#paper-1355#slide-9 | Additive vs Multiplicative | Model Type Conicity Vector Spread | Model Type Conicity Vector Spread | []
GEM-SciDuet-train-132#paper-1355#slide-10 | 1355 | Towards Understanding the Geometry of Knowledge Graph Embeddings | Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored -we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embeddings methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188
],
"paper_content_text": [
"Introduction Knowledge Graphs (KGs) are multi-relational graphs where nodes represent entities and typededges represent relationships among entities.",
"Recent research in this area has resulted in the development of several large KGs, such as NELL (Mitchell et al., 2015) , YAGO (Suchanek et al., 2007) , and Freebase (Bollacker et al., 2008) , among others.",
"These KGs contain thousands of predicates (e.g., person, city, mayorOf(person, city), etc.",
"), and millions of triples involving such predicates, e.g., (Bill de Blasio, mayorOf, New York City) .",
"The problem of learning embeddings for Knowledge Graphs has received significant attention in recent years, with several methods being proposed (Bordes et al., 2013; Lin et al., 2015; Nguyen et al., 2016; Nickel et al., 2016; Trouillon et al., 2016) .",
"These methods represent entities and relations in a KG as vectors in high dimensional space.",
"These vectors can then be used for various tasks, such as, link prediction, entity classification etc.",
"Starting with TransE (Bordes et al., 2013) , there have been many KG embedding methods such as TransH (Wang et al., 2014) , TransR (Lin et al., 2015) and STransE (Nguyen et al., 2016) which represent relations as translation vectors from head entities to tail entities.",
"These are additive models, as the vectors interact via addition and subtraction.",
"Other KG embedding models, such as, DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) , and ComplEx (Trouillon et al., 2016) are multiplicative where entityrelation-entity triple likelihood is quantified by a multiplicative score function.",
"All these methods employ a score function for distinguishing correct triples from incorrect ones.",
"In spite of the existence of many KG embedding methods, our understanding of the geometry and structure of such embeddings is very shallow.",
"A recent work (Mimno and Thompson, 2017) analyzed the geometry of word embeddings.",
"However, the problem of analyzing geometry of KG embeddings is still unexplored -we fill this important gap.",
"In this paper, we analyze the geometry of such vectors in terms of their lengths and conicity, which, as defined in Section 4, describes their positions and orientations in the vector space.",
"We later study the effects of model type and training hyperparameters on the geometry of KG embeddings and correlate geometry with performance.",
"We make the following contributions: • We initiate a study to analyze the geometry of various Knowledge Graph (KG) embeddings.",
"To the best of our knowledge, this is the first study of its kind.",
"We also formalize various metrics which can be used to study geometry of a set of vectors.",
"• Through extensive analysis, we discover several interesting insights about the geometry of KG embeddings.",
"For example, we find systematic differences between the geometries of embeddings learned by additive and multiplicative KG embedding methods.",
"• We also study the relationship between geometric attributes and predictive performance of the embeddings, resulting in several new insights.",
"For example, in case of multiplicative models, we observe that for entity vectors generated with a fixed number of negative samples, lower conicity (as defined in Section 4) or higher average vector length lead to higher performance.",
"Source code of all the analysis tools developed as part of this paper is available at https://github.com/malllabiisc/ kg-geometry.",
"We are hoping that these resources will enable one to quickly analyze the geometry of any KG embedding, and potentially other embeddings as well.",
"Related Work In spite of the extensive and growing literature on both KG and non-KG embedding methods, very little attention has been paid towards understanding the geometry of the learned embeddings.",
"A recent work (Mimno and Thompson, 2017) is an exception to this which addresses this problem in the context of word vectors.",
"This work revealed a surprising correlation between word vector geometry and the number of negative samples used during training.",
"Instead of word vectors, in this paper we focus on understanding the geometry of KG embeddings.",
"In spite of this difference, the insights we discover in this paper generalizes some of the observations in the work of (Mimno and Thompson, 2017) .",
"Please see Section 6.2 for more details.",
"Since KGs contain only positive triples, negative sampling has been used for training KG embeddings.",
"Effect of the number of negative samples in KG embedding performance was studied by (Toutanova et al., 2015) .",
"In this paper, we study the effect of the number of negative samples on KG embedding geometry as well as performance.",
"In addition to the additive and multiplicative KG embedding methods already mentioned in Section 1, there is another set of methods where the entity and relation vectors interact via a neural network.",
"Examples of methods in this category include NTN (Socher et al., 2013) , CONV (Toutanova et al., 2015) , ConvE (Dettmers et al., 2017) , R-GCN (Schlichtkrull et al., 2017) , ER-MLP (Dong et al., 2014) and ER-MLP-2n (Ravishankar et al., 2017) .",
"Due to space limitations, in this paper we restrict our scope to the analysis of the geometry of additive and multiplicative KG embedding models only, and leave the analysis of the geometry of neural network-based methods as part of future work.",
"Overview of KG Embedding Methods For our analysis, we consider six representative KG embedding methods: TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , STransE (Nguyen et al., 2016) , DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) and ComplEx (Trouillon et al., 2016) .",
"We refer to TransE, TransR and STransE as additive methods because they learn embeddings by modeling relations as translation vectors from one entity to another, which results in vectors interacting via the addition operation during training.",
"On the other hand, we refer to Dist-Mult, HolE and ComplEx as multiplicative methods as they quantify the likelihood of a triple belonging to the KG through a multiplicative score function.",
"The score functions optimized by these methods are summarized in Table 1 .",
"Notation: Let G = (E, R, T ) be a Knowledge Graph (KG) where E is the set of entities, R is the set of relations and T ⊂ E × R × E is the set of triples stored in the graph.",
"Most of the KG embedding methods learn vectors e ∈ R de for e ∈ E, and r ∈ R dr for r ∈ R. Some methods also learn projection matrices M r ∈ R dr×de for relations.",
"The correctness of a triple is evaluated using a model specific score function σ : E × R × E → R. For learning the embeddings, a loss function L(T , T ; θ), defined over a set of positive triples T , set of (sampled) negative triples T , and the parameters θ is optimized.",
"We use small italics characters (e.g., h, r) to represent entities and relations, and correspond-Type Model Score Function σ(h, r, t) Additive TransE (Bordes et al., 2013) − h + r − t 1 TransR (Lin et al., 2015) − Mrh + r − Mrt 1 STransE (Nguyen et al., 2016) − M 1 r h + r − M 2 r t 1 Multiplicative DistMult (Yang et al., 2014) r (h t) HolE (Nickel et al., 2016) r (h t) ComplEx (Trouillon et al., 2016) Re(r (h t )) Table 1 : Summary of various Knowledge Graph (KG) embedding methods used in the paper.",
"Please see Section 3 for more details.",
"ing bold characters to represent their vector embeddings (e.g., h, r).",
"We use bold capitalization (e.g., V) to represent a set of vectors.",
"Matrices are represented by capital italics characters (e.g., M ).",
"Additive KG Embedding Methods This is the set of methods where entity and relation vectors interact via additive operations.",
"The score function for these models can be expressed as below σ(h, r, t) = − M 1 r h + r − M 2 r t 1 (1) where h, t ∈ R de and r ∈ R dr are vectors for head entity, tail entity and relation respectively.",
"M 1 r , M 2 r ∈ R dr×de are projection matrices from entity space R de to relation space R dr .",
"TransE (Bordes et al., 2013) is the simplest additive model where the entity and relation vectors lie in same d−dimensional space, i.e., d e = d r = d. The projection matrices M 1 r = M 2 r = I d are identity matrices.",
"The relation vectors are modeled as translation vectors from head entity vectors to tail entity vectors.",
"Pairwise ranking loss is then used to learn these vectors.",
"Since the model is simple, it has limited capability in capturing many-to-one, one-to-many and many-to-many relations.",
"TransR (Lin et al., 2015) is another translationbased model which uses separate spaces for entity and relation vectors allowing it to address the shortcomings of TransE.",
"Entity vectors are projected into a relation specific space using the corresponding projection matrix M 1 r = M 2 r = M r .",
"The training is similar to TransE.",
"STransE (Nguyen et al., 2016) is a generalization of TransR and uses different projection matrices for head and tail entity vectors.",
"The training is similar to TransE.",
"STransE achieves better performance than the previous methods but at the cost of more number of parameters.",
"Equation 1 is the score function used in STransE.",
"TransE and TransR are special cases of STransE with M 1 r = M 2 r = I d and M 1 r = M 2 r = M r , respectively.",
"Multiplicative KG Embedding Methods This is the set of methods where the vectors interact via multiplicative operations (usually dot product).",
"The score function for these models can be expressed as σ(h, r, t) = r f (h, t) (2) where h, t, r ∈ F d are vectors for head entity, tail entity and relation respectively.",
"f (h, t) ∈ F d measures compatibility of head and tail entities and is specific to the model.",
"F is either real space R or complex space C. Detailed descriptions of the models we consider are as follows.",
"DistMult (Yang et al., 2014 ) models entities and relations as vectors in R d .",
"It uses an entry-wise product ( ) to measure compatibility between head and tail entities, while using logistic loss for training the model.",
"σ DistM ult (h, r, t) = r (h t) (3) Since the entry-wise product in (3) is symmetric, DistMult is not suitable for asymmetric and antisymmetric relations.",
"HolE (Nickel et al., 2016) also models entities and relations as vectors in R d .",
"It uses circular correlation operator ( ) as compatibility function defined as [h t] k = d−1 i=0 h i t (k+i) mod d The score function is given as σ HolE (h, r, t) = r (h t) (4) The circular correlation operator being asymmetric, can capture asymmetric and anti-symmetric relations, but at the cost of higher time complexity The points are sampled from 3d Spherical Gaussian with mean (1,1,1) and standard deviation 0.1 (left) and 1.3 (right).",
"Please refer to Section 4 for more details.",
"(O (d log d) ).",
"For training, we use pairwise ranking loss.",
"ComplEx (Trouillon et al., 2016) represents entities and relations as vectors in C d .",
"The compatibility of entity pairs is measured using entry-wise product between head and complex conjugate of tail entity vectors.",
"σ ComplEx (h, r, t) = Re(r (h t )) (5) In contrast to (3), using complex vectors in (5) allows ComplEx to handle symmetric, asymmetric and anti-symmetric relations using the same score function.",
"Similar to DistMult, logistic loss is used for training the model.",
"Metrics For our geometrical analysis, we first define a term 'alignment to mean' (ATM) of a vector v belonging to a set of vectors V, as the cosine similarity 1 between v and the mean of all vectors in V. ATM(v, V) = cosine v, 1 |V| x∈V x We also define 'conicity' of a set V as the mean ATM of all vectors in V. By this definition, a high value of Conicity(V) would imply that the vectors in V lie in a narrow cone centered at origin.",
"In other words, the vectors in the set V are highly aligned with each other.",
"In addition to that, we define the variance of ATM across all vectors in V, as the 'vector spread'(VS) of set V, For each method, a plot averaged across entity frequency bins is shown.",
"From these plots, we conclude that entity embeddings from additive models tend to have low (positive as well as negative) ATM and thereby low Conicity and high vector spread.",
"Interestingly, this is reversed in case of multiplicative methods.",
"Please see Section 6.1 for more details.",
"Conicity(V) = 1 |V| v∈V ATM(v, V) 1 cosine(u, v) = u v u v Dataset VS(V) = 1 |V| v∈V ATM(v, V)−Conicity(V) Experimental Setup Datasets: We run our experiments on subsets of two widely used datasets, viz., Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995) , called FB15k and WN18 (Bordes et al., 2013) , respectively.",
"We detail the characteristics of these datasets in Table 2 .",
"Please note that while the results presented in Section 6 are on the FB15K dataset, we reach the same conclusions on WN18.",
"The plots for our experiments on WN18 can be found in the Supplementary Section.",
"Hyperparameters: We experiment with multiple values of hyperparameters to understand their effect on the geometry of KG embeddings.",
"Specifically, we vary the dimension of the generated vectors between {50, 100, 200} and the number of negative samples used during training between {1, 50, 100}.",
"For more details on algorithm specific hyperparameters, we refer the reader to the Supplementary Section.",
"2 2 For training, we used codes from https://github.",
"Frequency Bins: We follow (Mimno and Thompson, 2017) for entity and relation samples used in the analysis.",
"Multiple bins of entities and relations are created based on their frequencies and 100 randomly sampled vectors are taken from each bin.",
"These set of sampled vectors are then used for our analysis.",
"For more information about sampling vectors, please refer to (Mimno and Thompson, 2017) .",
"Results and Analysis In this section, we evaluate the following questions.",
"• Does model type (e.g., additive vs multiplicative) have any effect on the geometry of embeddings?",
"(Section 6.",
"For each method, a plot averaged across entity frequency bins is shown.",
"Trends in these plots are similar to those in Figure 2 .",
"Main findings from these plots are summarized in Section 6.1.",
"• Does negative sampling have any effect on the embedding geometry?",
"(Section 6.2) • Does the dimension of embedding have any effect on its geometry?",
"(Section 6.3) • How is task performance related to embedding geometry?",
"(Section 6.4) In each subsection, we summarize the main findings at the beginning, followed by evidence supporting those findings.",
"Effect of Model Type on Geometry Summary of Findings: Additive: Low conicity and high vector spread.",
"Multiplicative: High conicity and low vector spread.",
"In this section, we explore whether the type of the score function optimized during the training has any effect on the geometry of the resulting embedding.",
"For this experiment, we set the number of negative samples to 1 and the vector dimension to 100 (we got similar results for 50-dimensional vectors).",
"Figure 2 and Figure 3 show the distribution of ATMs of these sampled entity and relation vectors, respectively.",
"3 Entity Embeddings: As seen in Figure 2 , there is a stark difference between the geometries of entity vectors produced by additive and multiplicative models.",
"The ATMs of all entity vectors produced by multiplicative models are positive with very low vector spread.",
"Their high conicity suggests that they are not uniformly dispersed in the vector space, but lie in a narrow cone along the mean vector.",
"This is in contrast to the entity vectors obtained from additive models which are both positive and negative with higher vector spread.",
"From the lower values of conicity, we conclude that entity vectors from additive models are evenly dispersed in the vector space.",
"This observation is also reinforced by looking at the high vector spread of additive models in comparison to that of multiplicative models.",
"We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins.",
"However, no such pattern was observed for multiplicative models and In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.2 conicity was consistently similar across frequency bins.",
"For clarity, we have not shown different plots for individual frequency bins.",
"Relation Embeddings: As in entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3 .",
"The conicity of relation vectors generated using additive models is almost zero across frequency bands.",
"This coupled with the high vector spread observed, suggests that these vectors are scattered throughout the vector space.",
"Relation vectors from multiplicative models exhibit high conicity and low vector spread, suggesting that they lie in a narrow cone centered at origin, like their entity counterparts.",
"Effect of Number of Negative Samples on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in #NegativeSamples for both entities and relations.",
"Multiplicative: Conicity increases while average vector length decrease with increasing #NegativeSamples for entities.",
"Conicity decreases, while average vector length remains constant (except HolE) for relations.",
"For experiments in this section, we keep the vector dimension constant at 100.",
"Entity Embeddings: As seen in Figure 4 (left) , the conicity of entity vectors increases as the number of negative samples is increased for multiplicative models.",
"In contrast, conicity of the entity vectors generated by additive models is unaffected by change in number of negative samples and they continue to be dispersed throughout the vector space.",
"From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant of any changes in number of negative samples.",
"On the other hand, increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE.",
"The average entity vector length for HolE is nearly 1 for any number of negative samples, which is understandable considering it constrains the entity vectors to lie inside a unit ball (Nickel et al., 2016) .",
"This constraint is also enforced by the additive models: TransE, TransR, and STransE.",
"Relation Embeddings: Similar to entity embeddings, in case of relation vectors trained using additive models, the average length and conicity do not change while varying the number of negative samples.",
"However, the conicity of relation vectors from multiplicative models decreases with increase in negative sampling.",
"The average relation vector length is invariant for all multiplicative methods, except for HolE.",
"We see a surprisingly big jump in average relation vector length for HolE going from 1 to 50 negative samples, but it does not change after that.",
"Due to space constraints in the paper, we refer the reader to the Supplementary Section for plots discussing the effect of number of negative samples on geometry of relation vectors.",
"We note that the multiplicative score between two vectors may be increased by either increasing the alignment between the two vectors (i.e., increasing Conicity and reducing vector spread between them), or by increasing their lengths.",
"It is interesting to note that we see exactly these effects in the geometry of multiplicative methods In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.3. analyzed above.",
"Correlation with Geometry of Word Embeddings Our conclusions from the geometrical analysis of entity vectors produced by multiplicative models are similar to the results in (Mimno and Thompson, 2017) , where increase in negative sampling leads to increased conicity of word vectors trained using the skip-gram with negative sampling (SGNS) method.",
"On the other hand, additive models remain unaffected by these changes.",
"SGNS tries to maximize a score function of the form w T · c for positive word context pairs, where w is the word vector and c is the context vector (Mikolov et al., 2013) .",
"This is very similar to the score function of multiplicative models as seen in Table 1 .",
"Hence, SGNS can be considered as a multiplicative model in the word domain.",
"Hence, we argue that our result on the increase in negative samples increasing the conicity of vectors trained using a multiplicative score function can be considered as a generalization of the one in (Mimno and Thompson, 2017) .",
"Effect of Vector Dimension on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in dimension for both entities and relations.",
"Multiplicative: Conicity decreases for both entities and relations with increasing dimension.",
"Average vector length increases for both entities and relations, except for HolE entities.",
"Entity Embeddings: To study the effect of vec-tor dimension on conicity and length, we set the number of negative samples to 1, while varying the vector dimension.",
"From Figure 5 (left), we observe that the conicity for entity vectors generated by any additive model is almost invariant of increase in dimension, though STransE exhibits a slight decrease.",
"In contrast, entity vector from multiplicative models show a clear decreasing pattern with increasing dimension.",
"As seen in Figure 5 (right) , the average lengths of entity vectors from multiplicative models increase sharply with increasing vector dimension, except for HolE.",
"In case of HolE, the average vector length remains constant at one.",
"Deviation involving HolE is expected as it enforces entity vectors to fall within a unit ball.",
"Similar constraints are enforced on entity vectors for additive models as well.",
"Thus, the average entity vector lengths are not affected by increasing vector dimension for all additive models.",
"Relation Embeddings: We reach similar conclusion when analyzing against increasing dimension the change in geometry of relation vectors produced using these KG embedding methods.",
"In this setting, the average length of relation vectors learned by HolE also increases as dimension is increased.",
"This is consistent with the other methods in the multiplicative family.",
"This is because, unlike entity vectors, the lengths of relation vectors of HolE are not constrained to be less than unit length.",
"Due to lack of space, we are unable to show plots for relation vectors here, but the same can be found in the Supplementary Section.",
"Relating Geometry to Performance Summary of Findings: Additive: Neither entites nor relations exhibit correlation between geometry and performance.",
"Multiplicative: Keeping negative samples fixed, lower conicity or higher average vector length for entities leads to improved performance.",
"No relationship for relations.",
"In this section, we analyze the relationship between geometry and performance on the Link prediction task, using the same setting as in (Bordes et al., 2013) .",
"Figure 6 (left) presents the effects of conicity of entity vectors on performance, while Figure 6 (right) shows the effects of average entity vector length.",
"4 As we see from Figure 6 (left), for fixed number of negative samples, the multiplicative model with lower conicity of entity vectors achieves better performance.",
"This performance gain is larger for higher numbers of negative samples (N).",
"Additive models don't exhibit any relationship between performance and conicity, as they are all clustered around zero conicity, which is in-line with our observations in previous sections.",
"In Figure 6 (right) , for all multiplicative models except HolE, a higher average entity vector length translates to better performance, while the number of negative samples is kept fixed.",
"Additive models and HolE don't exhibit any such patterns, as they are all clustered just below unit average entity vector length.",
"The above two observations for multiplicative models make intuitive sense, as lower conicity and higher average vector length would both translate to vectors being more dispersed in the space.",
"We see another interesting observation regarding the high sensitivity of HolE to the number of negative samples used during training.",
"Using a large number of negative examples (e.g., N = 50 or 100) leads to very high conicity in case of HolE.",
"Figure 6 (right) shows that average entity vector length of HolE is always one.",
"These two observations point towards HolE's entity vectors lying in a tiny part of the space.",
"This translates to HolE performing poorer than all other models in case of high numbers of negative sampling.",
"We also did a similar study for relation vectors, but did not see any discernible patterns.",
"Conclusion In this paper, we have initiated a systematic study into the important but unexplored problem of analyzing geometry of various Knowledge Graph (KG) embedding methods.",
"To the best of our knowledge, this is the first study of its kind.",
"Through extensive experiments on multiple realworld datasets, we are able to identify several insights into the geometry of KG embeddings.",
"We have also explored the relationship between KG embedding geometry and its task performance.",
"We have shared all our source code to foster further research in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"6.2",
"6.2.1",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Overview of KG Embedding Methods",
"Additive KG Embedding Methods",
"Multiplicative KG Embedding Methods",
"Metrics",
"Experimental Setup",
"Results and Analysis",
"Effect of Number of Negative Samples on Geometry",
"Correlation with Geometry of Word Embeddings",
"Conclusion"
]
} | GEM-SciDuet-train-132#paper-1355#slide-10 | Effect of Negative Samples Entity Vectors | Model Type Vector Type Conicity AVL
Additive Entity No Change No Change
Additive Relation No Change No Change
Multiplicative Relation Decreases No Change (except HolE) | Model Type Vector Type Conicity AVL
Additive Entity No Change No Change
Additive Relation No Change No Change
Multiplicative Relation Decreases No Change (except HolE) | [] |
GEM-SciDuet-train-132#paper-1355#slide-11 | 1355 | Towards Understanding the Geometry of Knowledge Graph Embeddings | Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored -we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embeddings methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188
],
"paper_content_text": [
"Introduction Knowledge Graphs (KGs) are multi-relational graphs where nodes represent entities and typededges represent relationships among entities.",
"Recent research in this area has resulted in the development of several large KGs, such as NELL (Mitchell et al., 2015) , YAGO (Suchanek et al., 2007) , and Freebase (Bollacker et al., 2008) , among others.",
"These KGs contain thousands of predicates (e.g., person, city, mayorOf(person, city), etc.",
"), and millions of triples involving such predicates, e.g., (Bill de Blasio, mayorOf, New York City) .",
"The problem of learning embeddings for Knowledge Graphs has received significant attention in recent years, with several methods being proposed (Bordes et al., 2013; Lin et al., 2015; Nguyen et al., 2016; Nickel et al., 2016; Trouillon et al., 2016) .",
"These methods represent entities and relations in a KG as vectors in high dimensional space.",
"These vectors can then be used for various tasks, such as, link prediction, entity classification etc.",
"Starting with TransE (Bordes et al., 2013) , there have been many KG embedding methods such as TransH (Wang et al., 2014) , TransR (Lin et al., 2015) and STransE (Nguyen et al., 2016) which represent relations as translation vectors from head entities to tail entities.",
"These are additive models, as the vectors interact via addition and subtraction.",
"Other KG embedding models, such as, DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) , and ComplEx (Trouillon et al., 2016) are multiplicative where entityrelation-entity triple likelihood is quantified by a multiplicative score function.",
"All these methods employ a score function for distinguishing correct triples from incorrect ones.",
"In spite of the existence of many KG embedding methods, our understanding of the geometry and structure of such embeddings is very shallow.",
"A recent work (Mimno and Thompson, 2017) analyzed the geometry of word embeddings.",
"However, the problem of analyzing geometry of KG embeddings is still unexplored -we fill this important gap.",
"In this paper, we analyze the geometry of such vectors in terms of their lengths and conicity, which, as defined in Section 4, describes their positions and orientations in the vector space.",
"We later study the effects of model type and training hyperparameters on the geometry of KG embeddings and correlate geometry with performance.",
"We make the following contributions: • We initiate a study to analyze the geometry of various Knowledge Graph (KG) embeddings.",
"To the best of our knowledge, this is the first study of its kind.",
"We also formalize various metrics which can be used to study geometry of a set of vectors.",
"• Through extensive analysis, we discover several interesting insights about the geometry of KG embeddings.",
"For example, we find systematic differences between the geometries of embeddings learned by additive and multiplicative KG embedding methods.",
"• We also study the relationship between geometric attributes and predictive performance of the embeddings, resulting in several new insights.",
"For example, in case of multiplicative models, we observe that for entity vectors generated with a fixed number of negative samples, lower conicity (as defined in Section 4) or higher average vector length lead to higher performance.",
"Source code of all the analysis tools developed as part of this paper is available at https://github.com/malllabiisc/ kg-geometry.",
"We are hoping that these resources will enable one to quickly analyze the geometry of any KG embedding, and potentially other embeddings as well.",
"Related Work In spite of the extensive and growing literature on both KG and non-KG embedding methods, very little attention has been paid towards understanding the geometry of the learned embeddings.",
"A recent work (Mimno and Thompson, 2017) is an exception to this which addresses this problem in the context of word vectors.",
"This work revealed a surprising correlation between word vector geometry and the number of negative samples used during training.",
"Instead of word vectors, in this paper we focus on understanding the geometry of KG embeddings.",
"In spite of this difference, the insights we discover in this paper generalizes some of the observations in the work of (Mimno and Thompson, 2017) .",
"Please see Section 6.2 for more details.",
"Since KGs contain only positive triples, negative sampling has been used for training KG embeddings.",
"Effect of the number of negative samples in KG embedding performance was studied by (Toutanova et al., 2015) .",
"In this paper, we study the effect of the number of negative samples on KG embedding geometry as well as performance.",
"In addition to the additive and multiplicative KG embedding methods already mentioned in Section 1, there is another set of methods where the entity and relation vectors interact via a neural network.",
"Examples of methods in this category include NTN (Socher et al., 2013) , CONV (Toutanova et al., 2015) , ConvE (Dettmers et al., 2017) , R-GCN (Schlichtkrull et al., 2017) , ER-MLP (Dong et al., 2014) and ER-MLP-2n (Ravishankar et al., 2017) .",
"Due to space limitations, in this paper we restrict our scope to the analysis of the geometry of additive and multiplicative KG embedding models only, and leave the analysis of the geometry of neural network-based methods as part of future work.",
"Overview of KG Embedding Methods For our analysis, we consider six representative KG embedding methods: TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , STransE (Nguyen et al., 2016) , DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) and ComplEx (Trouillon et al., 2016) .",
"We refer to TransE, TransR and STransE as additive methods because they learn embeddings by modeling relations as translation vectors from one entity to another, which results in vectors interacting via the addition operation during training.",
"On the other hand, we refer to Dist-Mult, HolE and ComplEx as multiplicative methods as they quantify the likelihood of a triple belonging to the KG through a multiplicative score function.",
"The score functions optimized by these methods are summarized in Table 1 .",
"Notation: Let G = (E, R, T ) be a Knowledge Graph (KG) where E is the set of entities, R is the set of relations and T ⊂ E × R × E is the set of triples stored in the graph.",
"Most of the KG embedding methods learn vectors e ∈ R de for e ∈ E, and r ∈ R dr for r ∈ R. Some methods also learn projection matrices M r ∈ R dr×de for relations.",
"The correctness of a triple is evaluated using a model specific score function σ : E × R × E → R. For learning the embeddings, a loss function L(T , T ; θ), defined over a set of positive triples T , set of (sampled) negative triples T , and the parameters θ is optimized.",
"We use small italics characters (e.g., h, r) to represent entities and relations, and correspond-Type Model Score Function σ(h, r, t) Additive TransE (Bordes et al., 2013) − h + r − t 1 TransR (Lin et al., 2015) − Mrh + r − Mrt 1 STransE (Nguyen et al., 2016) − M 1 r h + r − M 2 r t 1 Multiplicative DistMult (Yang et al., 2014) r (h t) HolE (Nickel et al., 2016) r (h t) ComplEx (Trouillon et al., 2016) Re(r (h t )) Table 1 : Summary of various Knowledge Graph (KG) embedding methods used in the paper.",
"Please see Section 3 for more details.",
"ing bold characters to represent their vector embeddings (e.g., h, r).",
"We use bold capitalization (e.g., V) to represent a set of vectors.",
"Matrices are represented by capital italics characters (e.g., M ).",
"Additive KG Embedding Methods This is the set of methods where entity and relation vectors interact via additive operations.",
"The score function for these models can be expressed as below σ(h, r, t) = − M 1 r h + r − M 2 r t 1 (1) where h, t ∈ R de and r ∈ R dr are vectors for head entity, tail entity and relation respectively.",
"M 1 r , M 2 r ∈ R dr×de are projection matrices from entity space R de to relation space R dr .",
"TransE (Bordes et al., 2013) is the simplest additive model where the entity and relation vectors lie in same d−dimensional space, i.e., d e = d r = d. The projection matrices M 1 r = M 2 r = I d are identity matrices.",
"The relation vectors are modeled as translation vectors from head entity vectors to tail entity vectors.",
"Pairwise ranking loss is then used to learn these vectors.",
"Since the model is simple, it has limited capability in capturing many-to-one, one-to-many and many-to-many relations.",
"TransR (Lin et al., 2015) is another translationbased model which uses separate spaces for entity and relation vectors allowing it to address the shortcomings of TransE.",
"Entity vectors are projected into a relation specific space using the corresponding projection matrix M 1 r = M 2 r = M r .",
"The training is similar to TransE.",
"STransE (Nguyen et al., 2016) is a generalization of TransR and uses different projection matrices for head and tail entity vectors.",
"The training is similar to TransE.",
"STransE achieves better performance than the previous methods but at the cost of more number of parameters.",
"Equation 1 is the score function used in STransE.",
"TransE and TransR are special cases of STransE with M 1 r = M 2 r = I d and M 1 r = M 2 r = M r , respectively.",
"Multiplicative KG Embedding Methods This is the set of methods where the vectors interact via multiplicative operations (usually dot product).",
"The score function for these models can be expressed as σ(h, r, t) = r f (h, t) (2) where h, t, r ∈ F d are vectors for head entity, tail entity and relation respectively.",
"f (h, t) ∈ F d measures compatibility of head and tail entities and is specific to the model.",
"F is either real space R or complex space C. Detailed descriptions of the models we consider are as follows.",
"DistMult (Yang et al., 2014 ) models entities and relations as vectors in R d .",
"It uses an entry-wise product ( ) to measure compatibility between head and tail entities, while using logistic loss for training the model.",
"σ DistM ult (h, r, t) = r (h t) (3) Since the entry-wise product in (3) is symmetric, DistMult is not suitable for asymmetric and antisymmetric relations.",
"HolE (Nickel et al., 2016) also models entities and relations as vectors in R d .",
"It uses circular correlation operator ( ) as compatibility function defined as [h t] k = d−1 i=0 h i t (k+i) mod d The score function is given as σ HolE (h, r, t) = r (h t) (4) The circular correlation operator being asymmetric, can capture asymmetric and anti-symmetric relations, but at the cost of higher time complexity The points are sampled from 3d Spherical Gaussian with mean (1,1,1) and standard deviation 0.1 (left) and 1.3 (right).",
"Please refer to Section 4 for more details.",
"(O (d log d) ).",
"For training, we use pairwise ranking loss.",
"ComplEx (Trouillon et al., 2016) represents entities and relations as vectors in C d .",
"The compatibility of entity pairs is measured using entry-wise product between head and complex conjugate of tail entity vectors.",
"σ ComplEx (h, r, t) = Re(r (h t )) (5) In contrast to (3), using complex vectors in (5) allows ComplEx to handle symmetric, asymmetric and anti-symmetric relations using the same score function.",
"Similar to DistMult, logistic loss is used for training the model.",
"Metrics For our geometrical analysis, we first define a term 'alignment to mean' (ATM) of a vector v belonging to a set of vectors V, as the cosine similarity 1 between v and the mean of all vectors in V. ATM(v, V) = cosine v, 1 |V| x∈V x We also define 'conicity' of a set V as the mean ATM of all vectors in V. By this definition, a high value of Conicity(V) would imply that the vectors in V lie in a narrow cone centered at origin.",
"In other words, the vectors in the set V are highly aligned with each other.",
"In addition to that, we define the variance of ATM across all vectors in V, as the 'vector spread'(VS) of set V, For each method, a plot averaged across entity frequency bins is shown.",
"From these plots, we conclude that entity embeddings from additive models tend to have low (positive as well as negative) ATM and thereby low Conicity and high vector spread.",
"Interestingly, this is reversed in case of multiplicative methods.",
"Please see Section 6.1 for more details.",
"Conicity(V) = 1 |V| v∈V ATM(v, V) 1 cosine(u, v) = u v u v Dataset VS(V) = 1 |V| v∈V ATM(v, V)−Conicity(V) Experimental Setup Datasets: We run our experiments on subsets of two widely used datasets, viz., Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995) , called FB15k and WN18 (Bordes et al., 2013) , respectively.",
"We detail the characteristics of these datasets in Table 2 .",
"Please note that while the results presented in Section 6 are on the FB15K dataset, we reach the same conclusions on WN18.",
"The plots for our experiments on WN18 can be found in the Supplementary Section.",
"Hyperparameters: We experiment with multiple values of hyperparameters to understand their effect on the geometry of KG embeddings.",
"Specifically, we vary the dimension of the generated vectors between {50, 100, 200} and the number of negative samples used during training between {1, 50, 100}.",
"For more details on algorithm specific hyperparameters, we refer the reader to the Supplementary Section.",
"2 2 For training, we used codes from https://github.",
"Frequency Bins: We follow (Mimno and Thompson, 2017) for entity and relation samples used in the analysis.",
"Multiple bins of entities and relations are created based on their frequencies and 100 randomly sampled vectors are taken from each bin.",
"These set of sampled vectors are then used for our analysis.",
"For more information about sampling vectors, please refer to (Mimno and Thompson, 2017) .",
"Results and Analysis In this section, we evaluate the following questions.",
"• Does model type (e.g., additive vs multiplicative) have any effect on the geometry of embeddings?",
"(Section 6.",
"For each method, a plot averaged across entity frequency bins is shown.",
"Trends in these plots are similar to those in Figure 2 .",
"Main findings from these plots are summarized in Section 6.1.",
"• Does negative sampling have any effect on the embedding geometry?",
"(Section 6.2) • Does the dimension of embedding have any effect on its geometry?",
"(Section 6.3) • How is task performance related to embedding geometry?",
"(Section 6.4) In each subsection, we summarize the main findings at the beginning, followed by evidence supporting those findings.",
"Effect of Model Type on Geometry Summary of Findings: Additive: Low conicity and high vector spread.",
"Multiplicative: High conicity and low vector spread.",
"In this section, we explore whether the type of the score function optimized during the training has any effect on the geometry of the resulting embedding.",
"For this experiment, we set the number of negative samples to 1 and the vector dimension to 100 (we got similar results for 50-dimensional vectors).",
"Figure 2 and Figure 3 show the distribution of ATMs of these sampled entity and relation vectors, respectively.",
"3 Entity Embeddings: As seen in Figure 2 , there is a stark difference between the geometries of entity vectors produced by additive and multiplicative models.",
"The ATMs of all entity vectors produced by multiplicative models are positive with very low vector spread.",
"Their high conicity suggests that they are not uniformly dispersed in the vector space, but lie in a narrow cone along the mean vector.",
"This is in contrast to the entity vectors obtained from additive models which are both positive and negative with higher vector spread.",
"From the lower values of conicity, we conclude that entity vectors from additive models are evenly dispersed in the vector space.",
"This observation is also reinforced by looking at the high vector spread of additive models in comparison to that of multiplicative models.",
"We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins.",
"However, no such pattern was observed for multiplicative models and In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.2 conicity was consistently similar across frequency bins.",
"For clarity, we have not shown different plots for individual frequency bins.",
"Relation Embeddings: As in entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3 .",
"The conicity of relation vectors generated using additive models is almost zero across frequency bands.",
"This coupled with the high vector spread observed, suggests that these vectors are scattered throughout the vector space.",
"Relation vectors from multiplicative models exhibit high conicity and low vector spread, suggesting that they lie in a narrow cone centered at origin, like their entity counterparts.",
"Effect of Number of Negative Samples on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in #NegativeSamples for both entities and relations.",
"Multiplicative: Conicity increases while average vector length decrease with increasing #NegativeSamples for entities.",
"Conicity decreases, while average vector length remains constant (except HolE) for relations.",
"For experiments in this section, we keep the vector dimension constant at 100.",
"Entity Embeddings: As seen in Figure 4 (left) , the conicity of entity vectors increases as the number of negative samples is increased for multiplicative models.",
"In contrast, conicity of the entity vectors generated by additive models is unaffected by change in number of negative samples and they continue to be dispersed throughout the vector space.",
"From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant of any changes in number of negative samples.",
"On the other hand, increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE.",
"The average entity vector length for HolE is nearly 1 for any number of negative samples, which is understandable considering it constrains the entity vectors to lie inside a unit ball (Nickel et al., 2016) .",
"This constraint is also enforced by the additive models: TransE, TransR, and STransE.",
"Relation Embeddings: Similar to entity embeddings, in case of relation vectors trained using additive models, the average length and conicity do not change while varying the number of negative samples.",
"However, the conicity of relation vectors from multiplicative models decreases with increase in negative sampling.",
"The average relation vector length is invariant for all multiplicative methods, except for HolE.",
"We see a surprisingly big jump in average relation vector length for HolE going from 1 to 50 negative samples, but it does not change after that.",
"Due to space constraints in the paper, we refer the reader to the Supplementary Section for plots discussing the effect of number of negative samples on geometry of relation vectors.",
"We note that the multiplicative score between two vectors may be increased by either increasing the alignment between the two vectors (i.e., increasing Conicity and reducing vector spread between them), or by increasing their lengths.",
"It is interesting to note that we see exactly these effects in the geometry of multiplicative methods In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.3. analyzed above.",
"Correlation with Geometry of Word Embeddings Our conclusions from the geometrical analysis of entity vectors produced by multiplicative models are similar to the results in (Mimno and Thompson, 2017) , where increase in negative sampling leads to increased conicity of word vectors trained using the skip-gram with negative sampling (SGNS) method.",
"On the other hand, additive models remain unaffected by these changes.",
"SGNS tries to maximize a score function of the form w T · c for positive word context pairs, where w is the word vector and c is the context vector (Mikolov et al., 2013) .",
"This is very similar to the score function of multiplicative models as seen in Table 1 .",
"Hence, SGNS can be considered as a multiplicative model in the word domain.",
"Hence, we argue that our result on the increase in negative samples increasing the conicity of vectors trained using a multiplicative score function can be considered as a generalization of the one in (Mimno and Thompson, 2017) .",
"Effect of Vector Dimension on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in dimension for both entities and relations.",
"Multiplicative: Conicity decreases for both entities and relations with increasing dimension.",
"Average vector length increases for both entities and relations, except for HolE entities.",
"Entity Embeddings: To study the effect of vec-tor dimension on conicity and length, we set the number of negative samples to 1, while varying the vector dimension.",
"From Figure 5 (left), we observe that the conicity for entity vectors generated by any additive model is almost invariant of increase in dimension, though STransE exhibits a slight decrease.",
"In contrast, entity vector from multiplicative models show a clear decreasing pattern with increasing dimension.",
"As seen in Figure 5 (right) , the average lengths of entity vectors from multiplicative models increase sharply with increasing vector dimension, except for HolE.",
"In case of HolE, the average vector length remains constant at one.",
"Deviation involving HolE is expected as it enforces entity vectors to fall within a unit ball.",
"Similar constraints are enforced on entity vectors for additive models as well.",
"Thus, the average entity vector lengths are not affected by increasing vector dimension for all additive models.",
"Relation Embeddings: We reach similar conclusion when analyzing against increasing dimension the change in geometry of relation vectors produced using these KG embedding methods.",
"In this setting, the average length of relation vectors learned by HolE also increases as dimension is increased.",
"This is consistent with the other methods in the multiplicative family.",
"This is because, unlike entity vectors, the lengths of relation vectors of HolE are not constrained to be less than unit length.",
"Due to lack of space, we are unable to show plots for relation vectors here, but the same can be found in the Supplementary Section.",
"Relating Geometry to Performance Summary of Findings: Additive: Neither entites nor relations exhibit correlation between geometry and performance.",
"Multiplicative: Keeping negative samples fixed, lower conicity or higher average vector length for entities leads to improved performance.",
"No relationship for relations.",
"In this section, we analyze the relationship between geometry and performance on the Link prediction task, using the same setting as in (Bordes et al., 2013) .",
"Figure 6 (left) presents the effects of conicity of entity vectors on performance, while Figure 6 (right) shows the effects of average entity vector length.",
"4 As we see from Figure 6 (left), for fixed number of negative samples, the multiplicative model with lower conicity of entity vectors achieves better performance.",
"This performance gain is larger for higher numbers of negative samples (N).",
"Additive models don't exhibit any relationship between performance and conicity, as they are all clustered around zero conicity, which is in-line with our observations in previous sections.",
"In Figure 6 (right) , for all multiplicative models except HolE, a higher average entity vector length translates to better performance, while the number of negative samples is kept fixed.",
"Additive models and HolE don't exhibit any such patterns, as they are all clustered just below unit average entity vector length.",
"The above two observations for multiplicative models make intuitive sense, as lower conicity and higher average vector length would both translate to vectors being more dispersed in the space.",
"We see another interesting observation regarding the high sensitivity of HolE to the number of negative samples used during training.",
"Using a large number of negative examples (e.g., N = 50 or 100) leads to very high conicity in case of HolE.",
"Figure 6 (right) shows that average entity vector length of HolE is always one.",
"These two observations point towards HolE's entity vectors lying in a tiny part of the space.",
"This translates to HolE performing poorer than all other models in case of high numbers of negative sampling.",
"We also did a similar study for relation vectors, but did not see any discernible patterns.",
"Conclusion In this paper, we have initiated a systematic study into the important but unexplored problem of analyzing geometry of various Knowledge Graph (KG) embedding methods.",
"To the best of our knowledge, this is the first study of its kind.",
"Through extensive experiments on multiple realworld datasets, we are able to identify several insights into the geometry of KG embeddings.",
"We have also explored the relationship between KG embedding geometry and its task performance.",
"We have shared all our source code to foster further research in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"6.2",
"6.2.1",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Overview of KG Embedding Methods",
"Additive KG Embedding Methods",
"Multiplicative KG Embedding Methods",
"Metrics",
"Experimental Setup",
"Results and Analysis",
"Effect of Number of Negative Samples on Geometry",
"Correlation with Geometry of Word Embeddings",
"Conclusion"
]
} | GEM-SciDuet-train-132#paper-1355#slide-11 | SGNS (Word2Vec [1]) as Multiplicative Model | A similar observation was made by (Mimno and Thompson, 2017) [2] for
SGNS-based word embeddings, where higher #negatives resulted in higher conicity.
Word2Vec [1] maximizes the word and context vector dot product for positive word-context pairs.
This behavior is consistent with that of multiplicative models.
1. Distributed representations of words and phrases and their compositionality, Mikolov et al., NIPS 2013. 2. The strange geometry of skip-gram with negative sampling, Mimno and Thompson, EMNLP 2017. | A similar observation was made by (Mimno and Thompson, 2017) [2] for
SGNS-based word embeddings, where higher #negatives resulted in higher conicity.
Word2Vec [1] maximizes the word and context vector dot product for positive word-context pairs.
This behavior is consistent with that of multiplicative models.
1. Distributed representations of words and phrases and their compositionality, Mikolov et al., NIPS 2013. 2. The strange geometry of skip-gram with negative sampling, Mimno and Thompson, EMNLP 2017. | [] |
GEM-SciDuet-train-132#paper-1355#slide-13 | 1355 | Towards Understanding the Geometry of Knowledge Graph Embeddings | Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored -we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embeddings methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188
],
"paper_content_text": [
"Introduction Knowledge Graphs (KGs) are multi-relational graphs where nodes represent entities and typededges represent relationships among entities.",
"Recent research in this area has resulted in the development of several large KGs, such as NELL (Mitchell et al., 2015) , YAGO (Suchanek et al., 2007) , and Freebase (Bollacker et al., 2008) , among others.",
"These KGs contain thousands of predicates (e.g., person, city, mayorOf(person, city), etc.",
"), and millions of triples involving such predicates, e.g., (Bill de Blasio, mayorOf, New York City) .",
"The problem of learning embeddings for Knowledge Graphs has received significant attention in recent years, with several methods being proposed (Bordes et al., 2013; Lin et al., 2015; Nguyen et al., 2016; Nickel et al., 2016; Trouillon et al., 2016) .",
"These methods represent entities and relations in a KG as vectors in high dimensional space.",
"These vectors can then be used for various tasks, such as, link prediction, entity classification etc.",
"Starting with TransE (Bordes et al., 2013) , there have been many KG embedding methods such as TransH (Wang et al., 2014) , TransR (Lin et al., 2015) and STransE (Nguyen et al., 2016) which represent relations as translation vectors from head entities to tail entities.",
"These are additive models, as the vectors interact via addition and subtraction.",
"Other KG embedding models, such as, DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) , and ComplEx (Trouillon et al., 2016) are multiplicative where entityrelation-entity triple likelihood is quantified by a multiplicative score function.",
"All these methods employ a score function for distinguishing correct triples from incorrect ones.",
"In spite of the existence of many KG embedding methods, our understanding of the geometry and structure of such embeddings is very shallow.",
"A recent work (Mimno and Thompson, 2017) analyzed the geometry of word embeddings.",
"However, the problem of analyzing geometry of KG embeddings is still unexplored -we fill this important gap.",
"In this paper, we analyze the geometry of such vectors in terms of their lengths and conicity, which, as defined in Section 4, describes their positions and orientations in the vector space.",
"We later study the effects of model type and training hyperparameters on the geometry of KG embeddings and correlate geometry with performance.",
"We make the following contributions: • We initiate a study to analyze the geometry of various Knowledge Graph (KG) embeddings.",
"To the best of our knowledge, this is the first study of its kind.",
"We also formalize various metrics which can be used to study geometry of a set of vectors.",
"• Through extensive analysis, we discover several interesting insights about the geometry of KG embeddings.",
"For example, we find systematic differences between the geometries of embeddings learned by additive and multiplicative KG embedding methods.",
"• We also study the relationship between geometric attributes and predictive performance of the embeddings, resulting in several new insights.",
"For example, in case of multiplicative models, we observe that for entity vectors generated with a fixed number of negative samples, lower conicity (as defined in Section 4) or higher average vector length lead to higher performance.",
"Source code of all the analysis tools developed as part of this paper is available at https://github.com/malllabiisc/ kg-geometry.",
"We are hoping that these resources will enable one to quickly analyze the geometry of any KG embedding, and potentially other embeddings as well.",
"Related Work In spite of the extensive and growing literature on both KG and non-KG embedding methods, very little attention has been paid towards understanding the geometry of the learned embeddings.",
"A recent work (Mimno and Thompson, 2017) is an exception to this which addresses this problem in the context of word vectors.",
"This work revealed a surprising correlation between word vector geometry and the number of negative samples used during training.",
"Instead of word vectors, in this paper we focus on understanding the geometry of KG embeddings.",
"In spite of this difference, the insights we discover in this paper generalizes some of the observations in the work of (Mimno and Thompson, 2017) .",
"Please see Section 6.2 for more details.",
"Since KGs contain only positive triples, negative sampling has been used for training KG embeddings.",
"Effect of the number of negative samples in KG embedding performance was studied by (Toutanova et al., 2015) .",
"In this paper, we study the effect of the number of negative samples on KG embedding geometry as well as performance.",
"In addition to the additive and multiplicative KG embedding methods already mentioned in Section 1, there is another set of methods where the entity and relation vectors interact via a neural network.",
"Examples of methods in this category include NTN (Socher et al., 2013) , CONV (Toutanova et al., 2015) , ConvE (Dettmers et al., 2017) , R-GCN (Schlichtkrull et al., 2017) , ER-MLP (Dong et al., 2014) and ER-MLP-2n (Ravishankar et al., 2017) .",
"Due to space limitations, in this paper we restrict our scope to the analysis of the geometry of additive and multiplicative KG embedding models only, and leave the analysis of the geometry of neural network-based methods as part of future work.",
"Overview of KG Embedding Methods For our analysis, we consider six representative KG embedding methods: TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , STransE (Nguyen et al., 2016) , DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) and ComplEx (Trouillon et al., 2016) .",
"We refer to TransE, TransR and STransE as additive methods because they learn embeddings by modeling relations as translation vectors from one entity to another, which results in vectors interacting via the addition operation during training.",
"On the other hand, we refer to Dist-Mult, HolE and ComplEx as multiplicative methods as they quantify the likelihood of a triple belonging to the KG through a multiplicative score function.",
"The score functions optimized by these methods are summarized in Table 1 .",
"Notation: Let G = (E, R, T ) be a Knowledge Graph (KG) where E is the set of entities, R is the set of relations and T ⊂ E × R × E is the set of triples stored in the graph.",
"Most of the KG embedding methods learn vectors e ∈ R de for e ∈ E, and r ∈ R dr for r ∈ R. Some methods also learn projection matrices M r ∈ R dr×de for relations.",
"The correctness of a triple is evaluated using a model specific score function σ : E × R × E → R. For learning the embeddings, a loss function L(T , T ; θ), defined over a set of positive triples T , set of (sampled) negative triples T , and the parameters θ is optimized.",
"We use small italics characters (e.g., h, r) to represent entities and relations, and correspond-Type Model Score Function σ(h, r, t) Additive TransE (Bordes et al., 2013) − h + r − t 1 TransR (Lin et al., 2015) − Mrh + r − Mrt 1 STransE (Nguyen et al., 2016) − M 1 r h + r − M 2 r t 1 Multiplicative DistMult (Yang et al., 2014) r (h t) HolE (Nickel et al., 2016) r (h t) ComplEx (Trouillon et al., 2016) Re(r (h t )) Table 1 : Summary of various Knowledge Graph (KG) embedding methods used in the paper.",
"Please see Section 3 for more details.",
"ing bold characters to represent their vector embeddings (e.g., h, r).",
"We use bold capitalization (e.g., V) to represent a set of vectors.",
"Matrices are represented by capital italics characters (e.g., M ).",
"Additive KG Embedding Methods This is the set of methods where entity and relation vectors interact via additive operations.",
"The score function for these models can be expressed as below σ(h, r, t) = − M 1 r h + r − M 2 r t 1 (1) where h, t ∈ R de and r ∈ R dr are vectors for head entity, tail entity and relation respectively.",
"M 1 r , M 2 r ∈ R dr×de are projection matrices from entity space R de to relation space R dr .",
"TransE (Bordes et al., 2013) is the simplest additive model where the entity and relation vectors lie in same d−dimensional space, i.e., d e = d r = d. The projection matrices M 1 r = M 2 r = I d are identity matrices.",
"The relation vectors are modeled as translation vectors from head entity vectors to tail entity vectors.",
"Pairwise ranking loss is then used to learn these vectors.",
"Since the model is simple, it has limited capability in capturing many-to-one, one-to-many and many-to-many relations.",
"TransR (Lin et al., 2015) is another translationbased model which uses separate spaces for entity and relation vectors allowing it to address the shortcomings of TransE.",
"Entity vectors are projected into a relation specific space using the corresponding projection matrix M 1 r = M 2 r = M r .",
"The training is similar to TransE.",
"STransE (Nguyen et al., 2016) is a generalization of TransR and uses different projection matrices for head and tail entity vectors.",
"The training is similar to TransE.",
"STransE achieves better performance than the previous methods but at the cost of more number of parameters.",
"Equation 1 is the score function used in STransE.",
"TransE and TransR are special cases of STransE with M 1 r = M 2 r = I d and M 1 r = M 2 r = M r , respectively.",
"Multiplicative KG Embedding Methods This is the set of methods where the vectors interact via multiplicative operations (usually dot product).",
"The score function for these models can be expressed as σ(h, r, t) = r f (h, t) (2) where h, t, r ∈ F d are vectors for head entity, tail entity and relation respectively.",
"f (h, t) ∈ F d measures compatibility of head and tail entities and is specific to the model.",
"F is either real space R or complex space C. Detailed descriptions of the models we consider are as follows.",
"DistMult (Yang et al., 2014 ) models entities and relations as vectors in R d .",
"It uses an entry-wise product ( ) to measure compatibility between head and tail entities, while using logistic loss for training the model.",
"σ DistM ult (h, r, t) = r (h t) (3) Since the entry-wise product in (3) is symmetric, DistMult is not suitable for asymmetric and antisymmetric relations.",
"HolE (Nickel et al., 2016) also models entities and relations as vectors in R d .",
"It uses circular correlation operator ( ) as compatibility function defined as [h t] k = d−1 i=0 h i t (k+i) mod d The score function is given as σ HolE (h, r, t) = r (h t) (4) The circular correlation operator being asymmetric, can capture asymmetric and anti-symmetric relations, but at the cost of higher time complexity The points are sampled from 3d Spherical Gaussian with mean (1,1,1) and standard deviation 0.1 (left) and 1.3 (right).",
"Please refer to Section 4 for more details.",
"(O (d log d) ).",
"For training, we use pairwise ranking loss.",
"ComplEx (Trouillon et al., 2016) represents entities and relations as vectors in C d .",
"The compatibility of entity pairs is measured using entry-wise product between head and complex conjugate of tail entity vectors.",
"σ ComplEx (h, r, t) = Re(r (h t )) (5) In contrast to (3), using complex vectors in (5) allows ComplEx to handle symmetric, asymmetric and anti-symmetric relations using the same score function.",
"Similar to DistMult, logistic loss is used for training the model.",
"Metrics For our geometrical analysis, we first define a term 'alignment to mean' (ATM) of a vector v belonging to a set of vectors V, as the cosine similarity 1 between v and the mean of all vectors in V. ATM(v, V) = cosine v, 1 |V| x∈V x We also define 'conicity' of a set V as the mean ATM of all vectors in V. By this definition, a high value of Conicity(V) would imply that the vectors in V lie in a narrow cone centered at origin.",
"In other words, the vectors in the set V are highly aligned with each other.",
"In addition to that, we define the variance of ATM across all vectors in V, as the 'vector spread'(VS) of set V, For each method, a plot averaged across entity frequency bins is shown.",
"From these plots, we conclude that entity embeddings from additive models tend to have low (positive as well as negative) ATM and thereby low Conicity and high vector spread.",
"Interestingly, this is reversed in case of multiplicative methods.",
"Please see Section 6.1 for more details.",
"Conicity(V) = 1 |V| v∈V ATM(v, V) 1 cosine(u, v) = u v u v Dataset VS(V) = 1 |V| v∈V ATM(v, V)−Conicity(V) Experimental Setup Datasets: We run our experiments on subsets of two widely used datasets, viz., Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995) , called FB15k and WN18 (Bordes et al., 2013) , respectively.",
"We detail the characteristics of these datasets in Table 2 .",
"Please note that while the results presented in Section 6 are on the FB15K dataset, we reach the same conclusions on WN18.",
"The plots for our experiments on WN18 can be found in the Supplementary Section.",
"Hyperparameters: We experiment with multiple values of hyperparameters to understand their effect on the geometry of KG embeddings.",
"Specifically, we vary the dimension of the generated vectors between {50, 100, 200} and the number of negative samples used during training between {1, 50, 100}.",
"For more details on algorithm specific hyperparameters, we refer the reader to the Supplementary Section.",
"2 2 For training, we used codes from https://github.",
"Frequency Bins: We follow (Mimno and Thompson, 2017) for entity and relation samples used in the analysis.",
"Multiple bins of entities and relations are created based on their frequencies and 100 randomly sampled vectors are taken from each bin.",
"These set of sampled vectors are then used for our analysis.",
"For more information about sampling vectors, please refer to (Mimno and Thompson, 2017) .",
"Results and Analysis In this section, we evaluate the following questions.",
"• Does model type (e.g., additive vs multiplicative) have any effect on the geometry of embeddings?",
"(Section 6.",
"For each method, a plot averaged across entity frequency bins is shown.",
"Trends in these plots are similar to those in Figure 2 .",
"Main findings from these plots are summarized in Section 6.1.",
"• Does negative sampling have any effect on the embedding geometry?",
"(Section 6.2) • Does the dimension of embedding have any effect on its geometry?",
"(Section 6.3) • How is task performance related to embedding geometry?",
"(Section 6.4) In each subsection, we summarize the main findings at the beginning, followed by evidence supporting those findings.",
"Effect of Model Type on Geometry Summary of Findings: Additive: Low conicity and high vector spread.",
"Multiplicative: High conicity and low vector spread.",
"In this section, we explore whether the type of the score function optimized during the training has any effect on the geometry of the resulting embedding.",
"For this experiment, we set the number of negative samples to 1 and the vector dimension to 100 (we got similar results for 50-dimensional vectors).",
"Figure 2 and Figure 3 show the distribution of ATMs of these sampled entity and relation vectors, respectively.",
"3 Entity Embeddings: As seen in Figure 2 , there is a stark difference between the geometries of entity vectors produced by additive and multiplicative models.",
"The ATMs of all entity vectors produced by multiplicative models are positive with very low vector spread.",
"Their high conicity suggests that they are not uniformly dispersed in the vector space, but lie in a narrow cone along the mean vector.",
"This is in contrast to the entity vectors obtained from additive models which are both positive and negative with higher vector spread.",
"From the lower values of conicity, we conclude that entity vectors from additive models are evenly dispersed in the vector space.",
"This observation is also reinforced by looking at the high vector spread of additive models in comparison to that of multiplicative models.",
"We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins.",
"However, no such pattern was observed for multiplicative models and In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.2 conicity was consistently similar across frequency bins.",
"For clarity, we have not shown different plots for individual frequency bins.",
"Relation Embeddings: As in entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3 .",
"The conicity of relation vectors generated using additive models is almost zero across frequency bands.",
"This coupled with the high vector spread observed, suggests that these vectors are scattered throughout the vector space.",
"Relation vectors from multiplicative models exhibit high conicity and low vector spread, suggesting that they lie in a narrow cone centered at origin, like their entity counterparts.",
"Effect of Number of Negative Samples on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in #NegativeSamples for both entities and relations.",
"Multiplicative: Conicity increases while average vector length decrease with increasing #NegativeSamples for entities.",
"Conicity decreases, while average vector length remains constant (except HolE) for relations.",
"For experiments in this section, we keep the vector dimension constant at 100.",
"Entity Embeddings: As seen in Figure 4 (left) , the conicity of entity vectors increases as the number of negative samples is increased for multiplicative models.",
"In contrast, conicity of the entity vectors generated by additive models is unaffected by change in number of negative samples and they continue to be dispersed throughout the vector space.",
"From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant of any changes in number of negative samples.",
"On the other hand, increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE.",
"The average entity vector length for HolE is nearly 1 for any number of negative samples, which is understandable considering it constrains the entity vectors to lie inside a unit ball (Nickel et al., 2016) .",
"This constraint is also enforced by the additive models: TransE, TransR, and STransE.",
"Relation Embeddings: Similar to entity embeddings, in case of relation vectors trained using additive models, the average length and conicity do not change while varying the number of negative samples.",
"However, the conicity of relation vectors from multiplicative models decreases with increase in negative sampling.",
"The average relation vector length is invariant for all multiplicative methods, except for HolE.",
"We see a surprisingly big jump in average relation vector length for HolE going from 1 to 50 negative samples, but it does not change after that.",
"Due to space constraints in the paper, we refer the reader to the Supplementary Section for plots discussing the effect of number of negative samples on geometry of relation vectors.",
"We note that the multiplicative score between two vectors may be increased by either increasing the alignment between the two vectors (i.e., increasing Conicity and reducing vector spread between them), or by increasing their lengths.",
"It is interesting to note that we see exactly these effects in the geometry of multiplicative methods In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.3. analyzed above.",
"Correlation with Geometry of Word Embeddings Our conclusions from the geometrical analysis of entity vectors produced by multiplicative models are similar to the results in (Mimno and Thompson, 2017) , where increase in negative sampling leads to increased conicity of word vectors trained using the skip-gram with negative sampling (SGNS) method.",
"On the other hand, additive models remain unaffected by these changes.",
"SGNS tries to maximize a score function of the form w T · c for positive word context pairs, where w is the word vector and c is the context vector (Mikolov et al., 2013) .",
"This is very similar to the score function of multiplicative models as seen in Table 1 .",
"Hence, SGNS can be considered as a multiplicative model in the word domain.",
"Hence, we argue that our result on the increase in negative samples increasing the conicity of vectors trained using a multiplicative score function can be considered as a generalization of the one in (Mimno and Thompson, 2017) .",
"Effect of Vector Dimension on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in dimension for both entities and relations.",
"Multiplicative: Conicity decreases for both entities and relations with increasing dimension.",
"Average vector length increases for both entities and relations, except for HolE entities.",
"Entity Embeddings: To study the effect of vec-tor dimension on conicity and length, we set the number of negative samples to 1, while varying the vector dimension.",
"From Figure 5 (left), we observe that the conicity for entity vectors generated by any additive model is almost invariant of increase in dimension, though STransE exhibits a slight decrease.",
"In contrast, entity vector from multiplicative models show a clear decreasing pattern with increasing dimension.",
"As seen in Figure 5 (right) , the average lengths of entity vectors from multiplicative models increase sharply with increasing vector dimension, except for HolE.",
"In case of HolE, the average vector length remains constant at one.",
"Deviation involving HolE is expected as it enforces entity vectors to fall within a unit ball.",
"Similar constraints are enforced on entity vectors for additive models as well.",
"Thus, the average entity vector lengths are not affected by increasing vector dimension for all additive models.",
"Relation Embeddings: We reach similar conclusion when analyzing against increasing dimension the change in geometry of relation vectors produced using these KG embedding methods.",
"In this setting, the average length of relation vectors learned by HolE also increases as dimension is increased.",
"This is consistent with the other methods in the multiplicative family.",
"This is because, unlike entity vectors, the lengths of relation vectors of HolE are not constrained to be less than unit length.",
"Due to lack of space, we are unable to show plots for relation vectors here, but the same can be found in the Supplementary Section.",
"Relating Geometry to Performance Summary of Findings: Additive: Neither entites nor relations exhibit correlation between geometry and performance.",
"Multiplicative: Keeping negative samples fixed, lower conicity or higher average vector length for entities leads to improved performance.",
"No relationship for relations.",
"In this section, we analyze the relationship between geometry and performance on the Link prediction task, using the same setting as in (Bordes et al., 2013) .",
"Figure 6 (left) presents the effects of conicity of entity vectors on performance, while Figure 6 (right) shows the effects of average entity vector length.",
"4 As we see from Figure 6 (left), for fixed number of negative samples, the multiplicative model with lower conicity of entity vectors achieves better performance.",
"This performance gain is larger for higher numbers of negative samples (N).",
"Additive models don't exhibit any relationship between performance and conicity, as they are all clustered around zero conicity, which is in-line with our observations in previous sections.",
"In Figure 6 (right) , for all multiplicative models except HolE, a higher average entity vector length translates to better performance, while the number of negative samples is kept fixed.",
"Additive models and HolE don't exhibit any such patterns, as they are all clustered just below unit average entity vector length.",
"The above two observations for multiplicative models make intuitive sense, as lower conicity and higher average vector length would both translate to vectors being more dispersed in the space.",
"We see another interesting observation regarding the high sensitivity of HolE to the number of negative samples used during training.",
"Using a large number of negative examples (e.g., N = 50 or 100) leads to very high conicity in case of HolE.",
"Figure 6 (right) shows that average entity vector length of HolE is always one.",
"These two observations point towards HolE's entity vectors lying in a tiny part of the space.",
"This translates to HolE performing poorer than all other models in case of high numbers of negative sampling.",
"We also did a similar study for relation vectors, but did not see any discernible patterns.",
"Conclusion In this paper, we have initiated a systematic study into the important but unexplored problem of analyzing geometry of various Knowledge Graph (KG) embedding methods.",
"To the best of our knowledge, this is the first study of its kind.",
"Through extensive experiments on multiple realworld datasets, we are able to identify several insights into the geometry of KG embeddings.",
"We have also explored the relationship between KG embedding geometry and its task performance.",
"We have shared all our source code to foster further research in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"6.2",
"6.2.1",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Overview of KG Embedding Methods",
"Additive KG Embedding Methods",
"Multiplicative KG Embedding Methods",
"Metrics",
"Experimental Setup",
"Results and Analysis",
"Effect of Number of Negative Samples on Geometry",
"Correlation with Geometry of Word Embeddings",
"Conclusion"
]
} | GEM-SciDuet-train-132#paper-1355#slide-13 | Effect of Dimensions | Model Type Vector Type Conicity AVL
Entity No Change No Change
Relation No Change No Change | Model Type Vector Type Conicity AVL
Entity No Change No Change
Relation No Change No Change | [] |
GEM-SciDuet-train-132#paper-1355#slide-14 | 1355 | Towards Understanding the Geometry of Knowledge Graph Embeddings | Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored -we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embeddings methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188
],
"paper_content_text": [
"Introduction Knowledge Graphs (KGs) are multi-relational graphs where nodes represent entities and typededges represent relationships among entities.",
"Recent research in this area has resulted in the development of several large KGs, such as NELL (Mitchell et al., 2015) , YAGO (Suchanek et al., 2007) , and Freebase (Bollacker et al., 2008) , among others.",
"These KGs contain thousands of predicates (e.g., person, city, mayorOf(person, city), etc.",
"), and millions of triples involving such predicates, e.g., (Bill de Blasio, mayorOf, New York City) .",
"The problem of learning embeddings for Knowledge Graphs has received significant attention in recent years, with several methods being proposed (Bordes et al., 2013; Lin et al., 2015; Nguyen et al., 2016; Nickel et al., 2016; Trouillon et al., 2016) .",
"These methods represent entities and relations in a KG as vectors in high dimensional space.",
"These vectors can then be used for various tasks, such as, link prediction, entity classification etc.",
"Starting with TransE (Bordes et al., 2013) , there have been many KG embedding methods such as TransH (Wang et al., 2014) , TransR (Lin et al., 2015) and STransE (Nguyen et al., 2016) which represent relations as translation vectors from head entities to tail entities.",
"These are additive models, as the vectors interact via addition and subtraction.",
"Other KG embedding models, such as, DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) , and ComplEx (Trouillon et al., 2016) are multiplicative where entityrelation-entity triple likelihood is quantified by a multiplicative score function.",
"All these methods employ a score function for distinguishing correct triples from incorrect ones.",
"In spite of the existence of many KG embedding methods, our understanding of the geometry and structure of such embeddings is very shallow.",
"A recent work (Mimno and Thompson, 2017) analyzed the geometry of word embeddings.",
"However, the problem of analyzing geometry of KG embeddings is still unexplored -we fill this important gap.",
"In this paper, we analyze the geometry of such vectors in terms of their lengths and conicity, which, as defined in Section 4, describes their positions and orientations in the vector space.",
"We later study the effects of model type and training hyperparameters on the geometry of KG embeddings and correlate geometry with performance.",
"We make the following contributions: • We initiate a study to analyze the geometry of various Knowledge Graph (KG) embeddings.",
"To the best of our knowledge, this is the first study of its kind.",
"We also formalize various metrics which can be used to study geometry of a set of vectors.",
"• Through extensive analysis, we discover several interesting insights about the geometry of KG embeddings.",
"For example, we find systematic differences between the geometries of embeddings learned by additive and multiplicative KG embedding methods.",
"• We also study the relationship between geometric attributes and predictive performance of the embeddings, resulting in several new insights.",
"For example, in case of multiplicative models, we observe that for entity vectors generated with a fixed number of negative samples, lower conicity (as defined in Section 4) or higher average vector length lead to higher performance.",
"Source code of all the analysis tools developed as part of this paper is available at https://github.com/malllabiisc/ kg-geometry.",
"We are hoping that these resources will enable one to quickly analyze the geometry of any KG embedding, and potentially other embeddings as well.",
"Related Work In spite of the extensive and growing literature on both KG and non-KG embedding methods, very little attention has been paid towards understanding the geometry of the learned embeddings.",
"A recent work (Mimno and Thompson, 2017) is an exception to this which addresses this problem in the context of word vectors.",
"This work revealed a surprising correlation between word vector geometry and the number of negative samples used during training.",
"Instead of word vectors, in this paper we focus on understanding the geometry of KG embeddings.",
"In spite of this difference, the insights we discover in this paper generalizes some of the observations in the work of (Mimno and Thompson, 2017) .",
"Please see Section 6.2 for more details.",
"Since KGs contain only positive triples, negative sampling has been used for training KG embeddings.",
"Effect of the number of negative samples in KG embedding performance was studied by (Toutanova et al., 2015) .",
"In this paper, we study the effect of the number of negative samples on KG embedding geometry as well as performance.",
"In addition to the additive and multiplicative KG embedding methods already mentioned in Section 1, there is another set of methods where the entity and relation vectors interact via a neural network.",
"Examples of methods in this category include NTN (Socher et al., 2013) , CONV (Toutanova et al., 2015) , ConvE (Dettmers et al., 2017) , R-GCN (Schlichtkrull et al., 2017) , ER-MLP (Dong et al., 2014) and ER-MLP-2n (Ravishankar et al., 2017) .",
"Due to space limitations, in this paper we restrict our scope to the analysis of the geometry of additive and multiplicative KG embedding models only, and leave the analysis of the geometry of neural network-based methods as part of future work.",
"Overview of KG Embedding Methods For our analysis, we consider six representative KG embedding methods: TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , STransE (Nguyen et al., 2016) , DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) and ComplEx (Trouillon et al., 2016) .",
"We refer to TransE, TransR and STransE as additive methods because they learn embeddings by modeling relations as translation vectors from one entity to another, which results in vectors interacting via the addition operation during training.",
"On the other hand, we refer to Dist-Mult, HolE and ComplEx as multiplicative methods as they quantify the likelihood of a triple belonging to the KG through a multiplicative score function.",
"The score functions optimized by these methods are summarized in Table 1 .",
"Notation: Let G = (E, R, T ) be a Knowledge Graph (KG) where E is the set of entities, R is the set of relations and T ⊂ E × R × E is the set of triples stored in the graph.",
"Most of the KG embedding methods learn vectors e ∈ R de for e ∈ E, and r ∈ R dr for r ∈ R. Some methods also learn projection matrices M r ∈ R dr×de for relations.",
"The correctness of a triple is evaluated using a model specific score function σ : E × R × E → R. For learning the embeddings, a loss function L(T , T ; θ), defined over a set of positive triples T , set of (sampled) negative triples T , and the parameters θ is optimized.",
"We use small italics characters (e.g., h, r) to represent entities and relations, and correspond-Type Model Score Function σ(h, r, t) Additive TransE (Bordes et al., 2013) − h + r − t 1 TransR (Lin et al., 2015) − Mrh + r − Mrt 1 STransE (Nguyen et al., 2016) − M 1 r h + r − M 2 r t 1 Multiplicative DistMult (Yang et al., 2014) r (h t) HolE (Nickel et al., 2016) r (h t) ComplEx (Trouillon et al., 2016) Re(r (h t )) Table 1 : Summary of various Knowledge Graph (KG) embedding methods used in the paper.",
"Please see Section 3 for more details.",
"ing bold characters to represent their vector embeddings (e.g., h, r).",
"We use bold capitalization (e.g., V) to represent a set of vectors.",
"Matrices are represented by capital italics characters (e.g., M ).",
"Additive KG Embedding Methods This is the set of methods where entity and relation vectors interact via additive operations.",
"The score function for these models can be expressed as below σ(h, r, t) = − M 1 r h + r − M 2 r t 1 (1) where h, t ∈ R de and r ∈ R dr are vectors for head entity, tail entity and relation respectively.",
"M 1 r , M 2 r ∈ R dr×de are projection matrices from entity space R de to relation space R dr .",
"TransE (Bordes et al., 2013) is the simplest additive model where the entity and relation vectors lie in same d−dimensional space, i.e., d e = d r = d. The projection matrices M 1 r = M 2 r = I d are identity matrices.",
"The relation vectors are modeled as translation vectors from head entity vectors to tail entity vectors.",
"Pairwise ranking loss is then used to learn these vectors.",
"Since the model is simple, it has limited capability in capturing many-to-one, one-to-many and many-to-many relations.",
"TransR (Lin et al., 2015) is another translationbased model which uses separate spaces for entity and relation vectors allowing it to address the shortcomings of TransE.",
"Entity vectors are projected into a relation specific space using the corresponding projection matrix M 1 r = M 2 r = M r .",
"The training is similar to TransE.",
"STransE (Nguyen et al., 2016) is a generalization of TransR and uses different projection matrices for head and tail entity vectors.",
"The training is similar to TransE.",
"STransE achieves better performance than the previous methods but at the cost of more number of parameters.",
"Equation 1 is the score function used in STransE.",
"TransE and TransR are special cases of STransE with M 1 r = M 2 r = I d and M 1 r = M 2 r = M r , respectively.",
"Multiplicative KG Embedding Methods This is the set of methods where the vectors interact via multiplicative operations (usually dot product).",
"The score function for these models can be expressed as σ(h, r, t) = r f (h, t) (2) where h, t, r ∈ F d are vectors for head entity, tail entity and relation respectively.",
"f (h, t) ∈ F d measures compatibility of head and tail entities and is specific to the model.",
"F is either real space R or complex space C. Detailed descriptions of the models we consider are as follows.",
"DistMult (Yang et al., 2014 ) models entities and relations as vectors in R d .",
"It uses an entry-wise product ( ) to measure compatibility between head and tail entities, while using logistic loss for training the model.",
"σ DistM ult (h, r, t) = r (h t) (3) Since the entry-wise product in (3) is symmetric, DistMult is not suitable for asymmetric and antisymmetric relations.",
"HolE (Nickel et al., 2016) also models entities and relations as vectors in R d .",
"It uses circular correlation operator ( ) as compatibility function defined as [h t] k = d−1 i=0 h i t (k+i) mod d The score function is given as σ HolE (h, r, t) = r (h t) (4) The circular correlation operator being asymmetric, can capture asymmetric and anti-symmetric relations, but at the cost of higher time complexity The points are sampled from 3d Spherical Gaussian with mean (1,1,1) and standard deviation 0.1 (left) and 1.3 (right).",
"Please refer to Section 4 for more details.",
"(O (d log d) ).",
"For training, we use pairwise ranking loss.",
"ComplEx (Trouillon et al., 2016) represents entities and relations as vectors in C d .",
"The compatibility of entity pairs is measured using entry-wise product between head and complex conjugate of tail entity vectors.",
"σ ComplEx (h, r, t) = Re(r (h t )) (5) In contrast to (3), using complex vectors in (5) allows ComplEx to handle symmetric, asymmetric and anti-symmetric relations using the same score function.",
"Similar to DistMult, logistic loss is used for training the model.",
"Metrics For our geometrical analysis, we first define a term 'alignment to mean' (ATM) of a vector v belonging to a set of vectors V, as the cosine similarity 1 between v and the mean of all vectors in V. ATM(v, V) = cosine v, 1 |V| x∈V x We also define 'conicity' of a set V as the mean ATM of all vectors in V. By this definition, a high value of Conicity(V) would imply that the vectors in V lie in a narrow cone centered at origin.",
"In other words, the vectors in the set V are highly aligned with each other.",
"In addition to that, we define the variance of ATM across all vectors in V, as the 'vector spread'(VS) of set V, For each method, a plot averaged across entity frequency bins is shown.",
"From these plots, we conclude that entity embeddings from additive models tend to have low (positive as well as negative) ATM and thereby low Conicity and high vector spread.",
"Interestingly, this is reversed in case of multiplicative methods.",
"Please see Section 6.1 for more details.",
"Conicity(V) = 1 |V| v∈V ATM(v, V) 1 cosine(u, v) = u v u v Dataset VS(V) = 1 |V| v∈V ATM(v, V)−Conicity(V) Experimental Setup Datasets: We run our experiments on subsets of two widely used datasets, viz., Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995) , called FB15k and WN18 (Bordes et al., 2013) , respectively.",
"We detail the characteristics of these datasets in Table 2 .",
"Please note that while the results presented in Section 6 are on the FB15K dataset, we reach the same conclusions on WN18.",
"The plots for our experiments on WN18 can be found in the Supplementary Section.",
"Hyperparameters: We experiment with multiple values of hyperparameters to understand their effect on the geometry of KG embeddings.",
"Specifically, we vary the dimension of the generated vectors between {50, 100, 200} and the number of negative samples used during training between {1, 50, 100}.",
"For more details on algorithm specific hyperparameters, we refer the reader to the Supplementary Section.",
"2 2 For training, we used codes from https://github.",
"Frequency Bins: We follow (Mimno and Thompson, 2017) for entity and relation samples used in the analysis.",
"Multiple bins of entities and relations are created based on their frequencies and 100 randomly sampled vectors are taken from each bin.",
"These set of sampled vectors are then used for our analysis.",
"For more information about sampling vectors, please refer to (Mimno and Thompson, 2017) .",
"Results and Analysis In this section, we evaluate the following questions.",
"• Does model type (e.g., additive vs multiplicative) have any effect on the geometry of embeddings?",
"(Section 6.",
"For each method, a plot averaged across entity frequency bins is shown.",
"Trends in these plots are similar to those in Figure 2 .",
"Main findings from these plots are summarized in Section 6.1.",
"• Does negative sampling have any effect on the embedding geometry?",
"(Section 6.2) • Does the dimension of embedding have any effect on its geometry?",
"(Section 6.3) • How is task performance related to embedding geometry?",
"(Section 6.4) In each subsection, we summarize the main findings at the beginning, followed by evidence supporting those findings.",
"Effect of Model Type on Geometry Summary of Findings: Additive: Low conicity and high vector spread.",
"Multiplicative: High conicity and low vector spread.",
"In this section, we explore whether the type of the score function optimized during the training has any effect on the geometry of the resulting embedding.",
"For this experiment, we set the number of negative samples to 1 and the vector dimension to 100 (we got similar results for 50-dimensional vectors).",
"Figure 2 and Figure 3 show the distribution of ATMs of these sampled entity and relation vectors, respectively.",
"3 Entity Embeddings: As seen in Figure 2 , there is a stark difference between the geometries of entity vectors produced by additive and multiplicative models.",
"The ATMs of all entity vectors produced by multiplicative models are positive with very low vector spread.",
"Their high conicity suggests that they are not uniformly dispersed in the vector space, but lie in a narrow cone along the mean vector.",
"This is in contrast to the entity vectors obtained from additive models which are both positive and negative with higher vector spread.",
"From the lower values of conicity, we conclude that entity vectors from additive models are evenly dispersed in the vector space.",
"This observation is also reinforced by looking at the high vector spread of additive models in comparison to that of multiplicative models.",
"We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins.",
"However, no such pattern was observed for multiplicative models and In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.2 conicity was consistently similar across frequency bins.",
"For clarity, we have not shown different plots for individual frequency bins.",
"Relation Embeddings: As in entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3 .",
"The conicity of relation vectors generated using additive models is almost zero across frequency bands.",
"This coupled with the high vector spread observed, suggests that these vectors are scattered throughout the vector space.",
"Relation vectors from multiplicative models exhibit high conicity and low vector spread, suggesting that they lie in a narrow cone centered at origin, like their entity counterparts.",
"Effect of Number of Negative Samples on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in #NegativeSamples for both entities and relations.",
"Multiplicative: Conicity increases while average vector length decrease with increasing #NegativeSamples for entities.",
"Conicity decreases, while average vector length remains constant (except HolE) for relations.",
"For experiments in this section, we keep the vector dimension constant at 100.",
"Entity Embeddings: As seen in Figure 4 (left) , the conicity of entity vectors increases as the number of negative samples is increased for multiplicative models.",
"In contrast, conicity of the entity vectors generated by additive models is unaffected by change in number of negative samples and they continue to be dispersed throughout the vector space.",
"From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant of any changes in number of negative samples.",
"On the other hand, increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE.",
"The average entity vector length for HolE is nearly 1 for any number of negative samples, which is understandable considering it constrains the entity vectors to lie inside a unit ball (Nickel et al., 2016) .",
"This constraint is also enforced by the additive models: TransE, TransR, and STransE.",
"Relation Embeddings: Similar to entity embeddings, in case of relation vectors trained using additive models, the average length and conicity do not change while varying the number of negative samples.",
"However, the conicity of relation vectors from multiplicative models decreases with increase in negative sampling.",
"The average relation vector length is invariant for all multiplicative methods, except for HolE.",
"We see a surprisingly big jump in average relation vector length for HolE going from 1 to 50 negative samples, but it does not change after that.",
"Due to space constraints in the paper, we refer the reader to the Supplementary Section for plots discussing the effect of number of negative samples on geometry of relation vectors.",
"We note that the multiplicative score between two vectors may be increased by either increasing the alignment between the two vectors (i.e., increasing Conicity and reducing vector spread between them), or by increasing their lengths.",
"It is interesting to note that we see exactly these effects in the geometry of multiplicative methods In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.3. analyzed above.",
"Correlation with Geometry of Word Embeddings Our conclusions from the geometrical analysis of entity vectors produced by multiplicative models are similar to the results in (Mimno and Thompson, 2017) , where increase in negative sampling leads to increased conicity of word vectors trained using the skip-gram with negative sampling (SGNS) method.",
"On the other hand, additive models remain unaffected by these changes.",
"SGNS tries to maximize a score function of the form w T · c for positive word context pairs, where w is the word vector and c is the context vector (Mikolov et al., 2013) .",
"This is very similar to the score function of multiplicative models as seen in Table 1 .",
"Hence, SGNS can be considered as a multiplicative model in the word domain.",
"Hence, we argue that our result on the increase in negative samples increasing the conicity of vectors trained using a multiplicative score function can be considered as a generalization of the one in (Mimno and Thompson, 2017) .",
"Effect of Vector Dimension on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in dimension for both entities and relations.",
"Multiplicative: Conicity decreases for both entities and relations with increasing dimension.",
"Average vector length increases for both entities and relations, except for HolE entities.",
"Entity Embeddings: To study the effect of vec-tor dimension on conicity and length, we set the number of negative samples to 1, while varying the vector dimension.",
"From Figure 5 (left), we observe that the conicity for entity vectors generated by any additive model is almost invariant of increase in dimension, though STransE exhibits a slight decrease.",
"In contrast, entity vector from multiplicative models show a clear decreasing pattern with increasing dimension.",
"As seen in Figure 5 (right) , the average lengths of entity vectors from multiplicative models increase sharply with increasing vector dimension, except for HolE.",
"In case of HolE, the average vector length remains constant at one.",
"Deviation involving HolE is expected as it enforces entity vectors to fall within a unit ball.",
"Similar constraints are enforced on entity vectors for additive models as well.",
"Thus, the average entity vector lengths are not affected by increasing vector dimension for all additive models.",
"Relation Embeddings: We reach similar conclusion when analyzing against increasing dimension the change in geometry of relation vectors produced using these KG embedding methods.",
"In this setting, the average length of relation vectors learned by HolE also increases as dimension is increased.",
"This is consistent with the other methods in the multiplicative family.",
"This is because, unlike entity vectors, the lengths of relation vectors of HolE are not constrained to be less than unit length.",
"Due to lack of space, we are unable to show plots for relation vectors here, but the same can be found in the Supplementary Section.",
"Relating Geometry to Performance Summary of Findings: Additive: Neither entites nor relations exhibit correlation between geometry and performance.",
"Multiplicative: Keeping negative samples fixed, lower conicity or higher average vector length for entities leads to improved performance.",
"No relationship for relations.",
"In this section, we analyze the relationship between geometry and performance on the Link prediction task, using the same setting as in (Bordes et al., 2013) .",
"Figure 6 (left) presents the effects of conicity of entity vectors on performance, while Figure 6 (right) shows the effects of average entity vector length.",
"4 As we see from Figure 6 (left), for fixed number of negative samples, the multiplicative model with lower conicity of entity vectors achieves better performance.",
"This performance gain is larger for higher numbers of negative samples (N).",
"Additive models don't exhibit any relationship between performance and conicity, as they are all clustered around zero conicity, which is in-line with our observations in previous sections.",
"In Figure 6 (right) , for all multiplicative models except HolE, a higher average entity vector length translates to better performance, while the number of negative samples is kept fixed.",
"Additive models and HolE don't exhibit any such patterns, as they are all clustered just below unit average entity vector length.",
"The above two observations for multiplicative models make intuitive sense, as lower conicity and higher average vector length would both translate to vectors being more dispersed in the space.",
"We see another interesting observation regarding the high sensitivity of HolE to the number of negative samples used during training.",
"Using a large number of negative examples (e.g., N = 50 or 100) leads to very high conicity in case of HolE.",
"Figure 6 (right) shows that average entity vector length of HolE is always one.",
"These two observations point towards HolE's entity vectors lying in a tiny part of the space.",
"This translates to HolE performing poorer than all other models in case of high numbers of negative sampling.",
"We also did a similar study for relation vectors, but did not see any discernible patterns.",
"Conclusion In this paper, we have initiated a systematic study into the important but unexplored problem of analyzing geometry of various Knowledge Graph (KG) embedding methods.",
"To the best of our knowledge, this is the first study of its kind.",
"Through extensive experiments on multiple realworld datasets, we are able to identify several insights into the geometry of KG embeddings.",
"We have also explored the relationship between KG embedding geometry and its task performance.",
"We have shared all our source code to foster further research in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"6.2",
"6.2.1",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Overview of KG Embedding Methods",
"Additive KG Embedding Methods",
"Multiplicative KG Embedding Methods",
"Metrics",
"Experimental Setup",
"Results and Analysis",
"Effect of Number of Negative Samples on Geometry",
"Correlation with Geometry of Word Embeddings",
"Conclusion"
]
} | GEM-SciDuet-train-132#paper-1355#slide-14 | Correlation b w Geometry and Performance | No correlation between geometry and performance.
For fixed number of negative samples,
Conicity has negative correlation with performance
AVL has positive correlation with performance | No correlation between geometry and performance.
For fixed number of negative samples,
Conicity has negative correlation with performance
AVL has positive correlation with performance | [] |
GEM-SciDuet-train-132#paper-1355#slide-15 | 1355 | Towards Understanding the Geometry of Knowledge Graph Embeddings | Knowledge Graph (KG) embedding has emerged as a very active area of research over the last few years, resulting in the development of several embedding methods. These KG embedding methods represent KG entities and relations as vectors in a high-dimensional space. Despite this popularity and effectiveness of KG embeddings in various tasks (e.g., link prediction), geometric understanding of such embeddings (i.e., arrangement of entity and relation vectors in vector space) is unexplored -we fill this gap in the paper. We initiate a study to analyze the geometry of KG embeddings and correlate it with task performance and other hyperparameters. To the best of our knowledge, this is the first study of its kind. Through extensive experiments on real-world datasets, we discover several insights. For example, we find that there are sharp differences between the geometry of embeddings learnt by different classes of KG embeddings methods. We hope that this initial study will inspire other follow-up research on this important but unexplored problem. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188
],
"paper_content_text": [
"Introduction Knowledge Graphs (KGs) are multi-relational graphs where nodes represent entities and typededges represent relationships among entities.",
"Recent research in this area has resulted in the development of several large KGs, such as NELL (Mitchell et al., 2015) , YAGO (Suchanek et al., 2007) , and Freebase (Bollacker et al., 2008) , among others.",
"These KGs contain thousands of predicates (e.g., person, city, mayorOf(person, city), etc.",
"), and millions of triples involving such predicates, e.g., (Bill de Blasio, mayorOf, New York City) .",
"The problem of learning embeddings for Knowledge Graphs has received significant attention in recent years, with several methods being proposed (Bordes et al., 2013; Lin et al., 2015; Nguyen et al., 2016; Nickel et al., 2016; Trouillon et al., 2016) .",
"These methods represent entities and relations in a KG as vectors in high dimensional space.",
"These vectors can then be used for various tasks, such as, link prediction, entity classification etc.",
"Starting with TransE (Bordes et al., 2013) , there have been many KG embedding methods such as TransH (Wang et al., 2014) , TransR (Lin et al., 2015) and STransE (Nguyen et al., 2016) which represent relations as translation vectors from head entities to tail entities.",
"These are additive models, as the vectors interact via addition and subtraction.",
"Other KG embedding models, such as, DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) , and ComplEx (Trouillon et al., 2016) are multiplicative where entityrelation-entity triple likelihood is quantified by a multiplicative score function.",
"All these methods employ a score function for distinguishing correct triples from incorrect ones.",
"In spite of the existence of many KG embedding methods, our understanding of the geometry and structure of such embeddings is very shallow.",
"A recent work (Mimno and Thompson, 2017) analyzed the geometry of word embeddings.",
"However, the problem of analyzing geometry of KG embeddings is still unexplored -we fill this important gap.",
"In this paper, we analyze the geometry of such vectors in terms of their lengths and conicity, which, as defined in Section 4, describes their positions and orientations in the vector space.",
"We later study the effects of model type and training hyperparameters on the geometry of KG embeddings and correlate geometry with performance.",
"We make the following contributions: • We initiate a study to analyze the geometry of various Knowledge Graph (KG) embeddings.",
"To the best of our knowledge, this is the first study of its kind.",
"We also formalize various metrics which can be used to study geometry of a set of vectors.",
"• Through extensive analysis, we discover several interesting insights about the geometry of KG embeddings.",
"For example, we find systematic differences between the geometries of embeddings learned by additive and multiplicative KG embedding methods.",
"• We also study the relationship between geometric attributes and predictive performance of the embeddings, resulting in several new insights.",
"For example, in case of multiplicative models, we observe that for entity vectors generated with a fixed number of negative samples, lower conicity (as defined in Section 4) or higher average vector length lead to higher performance.",
"Source code of all the analysis tools developed as part of this paper is available at https://github.com/malllabiisc/ kg-geometry.",
"We are hoping that these resources will enable one to quickly analyze the geometry of any KG embedding, and potentially other embeddings as well.",
"Related Work In spite of the extensive and growing literature on both KG and non-KG embedding methods, very little attention has been paid towards understanding the geometry of the learned embeddings.",
"A recent work (Mimno and Thompson, 2017) is an exception to this which addresses this problem in the context of word vectors.",
"This work revealed a surprising correlation between word vector geometry and the number of negative samples used during training.",
"Instead of word vectors, in this paper we focus on understanding the geometry of KG embeddings.",
"In spite of this difference, the insights we discover in this paper generalizes some of the observations in the work of (Mimno and Thompson, 2017) .",
"Please see Section 6.2 for more details.",
"Since KGs contain only positive triples, negative sampling has been used for training KG embeddings.",
"Effect of the number of negative samples in KG embedding performance was studied by (Toutanova et al., 2015) .",
"In this paper, we study the effect of the number of negative samples on KG embedding geometry as well as performance.",
"In addition to the additive and multiplicative KG embedding methods already mentioned in Section 1, there is another set of methods where the entity and relation vectors interact via a neural network.",
"Examples of methods in this category include NTN (Socher et al., 2013) , CONV (Toutanova et al., 2015) , ConvE (Dettmers et al., 2017) , R-GCN (Schlichtkrull et al., 2017) , ER-MLP (Dong et al., 2014) and ER-MLP-2n (Ravishankar et al., 2017) .",
"Due to space limitations, in this paper we restrict our scope to the analysis of the geometry of additive and multiplicative KG embedding models only, and leave the analysis of the geometry of neural network-based methods as part of future work.",
"Overview of KG Embedding Methods For our analysis, we consider six representative KG embedding methods: TransE (Bordes et al., 2013) , TransR (Lin et al., 2015) , STransE (Nguyen et al., 2016) , DistMult (Yang et al., 2014) , HolE (Nickel et al., 2016) and ComplEx (Trouillon et al., 2016) .",
"We refer to TransE, TransR and STransE as additive methods because they learn embeddings by modeling relations as translation vectors from one entity to another, which results in vectors interacting via the addition operation during training.",
"On the other hand, we refer to Dist-Mult, HolE and ComplEx as multiplicative methods as they quantify the likelihood of a triple belonging to the KG through a multiplicative score function.",
"The score functions optimized by these methods are summarized in Table 1 .",
"Notation: Let G = (E, R, T ) be a Knowledge Graph (KG) where E is the set of entities, R is the set of relations and T ⊂ E × R × E is the set of triples stored in the graph.",
"Most of the KG embedding methods learn vectors e ∈ R de for e ∈ E, and r ∈ R dr for r ∈ R. Some methods also learn projection matrices M r ∈ R dr×de for relations.",
"The correctness of a triple is evaluated using a model specific score function σ : E × R × E → R. For learning the embeddings, a loss function L(T , T ; θ), defined over a set of positive triples T , set of (sampled) negative triples T , and the parameters θ is optimized.",
"We use small italics characters (e.g., h, r) to represent entities and relations, and correspond-Type Model Score Function σ(h, r, t) Additive TransE (Bordes et al., 2013) − h + r − t 1 TransR (Lin et al., 2015) − Mrh + r − Mrt 1 STransE (Nguyen et al., 2016) − M 1 r h + r − M 2 r t 1 Multiplicative DistMult (Yang et al., 2014) r (h t) HolE (Nickel et al., 2016) r (h t) ComplEx (Trouillon et al., 2016) Re(r (h t )) Table 1 : Summary of various Knowledge Graph (KG) embedding methods used in the paper.",
"Please see Section 3 for more details.",
"ing bold characters to represent their vector embeddings (e.g., h, r).",
"We use bold capitalization (e.g., V) to represent a set of vectors.",
"Matrices are represented by capital italics characters (e.g., M ).",
"Additive KG Embedding Methods This is the set of methods where entity and relation vectors interact via additive operations.",
"The score function for these models can be expressed as below σ(h, r, t) = − M 1 r h + r − M 2 r t 1 (1) where h, t ∈ R de and r ∈ R dr are vectors for head entity, tail entity and relation respectively.",
"M 1 r , M 2 r ∈ R dr×de are projection matrices from entity space R de to relation space R dr .",
"TransE (Bordes et al., 2013) is the simplest additive model where the entity and relation vectors lie in same d−dimensional space, i.e., d e = d r = d. The projection matrices M 1 r = M 2 r = I d are identity matrices.",
"The relation vectors are modeled as translation vectors from head entity vectors to tail entity vectors.",
"Pairwise ranking loss is then used to learn these vectors.",
"Since the model is simple, it has limited capability in capturing many-to-one, one-to-many and many-to-many relations.",
"TransR (Lin et al., 2015) is another translationbased model which uses separate spaces for entity and relation vectors allowing it to address the shortcomings of TransE.",
"Entity vectors are projected into a relation specific space using the corresponding projection matrix M 1 r = M 2 r = M r .",
"The training is similar to TransE.",
"STransE (Nguyen et al., 2016) is a generalization of TransR and uses different projection matrices for head and tail entity vectors.",
"The training is similar to TransE.",
"STransE achieves better performance than the previous methods but at the cost of more number of parameters.",
"Equation 1 is the score function used in STransE.",
"TransE and TransR are special cases of STransE with M 1 r = M 2 r = I d and M 1 r = M 2 r = M r , respectively.",
"Multiplicative KG Embedding Methods This is the set of methods where the vectors interact via multiplicative operations (usually dot product).",
"The score function for these models can be expressed as σ(h, r, t) = r f (h, t) (2) where h, t, r ∈ F d are vectors for head entity, tail entity and relation respectively.",
"f (h, t) ∈ F d measures compatibility of head and tail entities and is specific to the model.",
"F is either real space R or complex space C. Detailed descriptions of the models we consider are as follows.",
"DistMult (Yang et al., 2014 ) models entities and relations as vectors in R d .",
"It uses an entry-wise product ( ) to measure compatibility between head and tail entities, while using logistic loss for training the model.",
"σ DistM ult (h, r, t) = r (h t) (3) Since the entry-wise product in (3) is symmetric, DistMult is not suitable for asymmetric and antisymmetric relations.",
"HolE (Nickel et al., 2016) also models entities and relations as vectors in R d .",
"It uses circular correlation operator ( ) as compatibility function defined as [h t] k = d−1 i=0 h i t (k+i) mod d The score function is given as σ HolE (h, r, t) = r (h t) (4) The circular correlation operator being asymmetric, can capture asymmetric and anti-symmetric relations, but at the cost of higher time complexity The points are sampled from 3d Spherical Gaussian with mean (1,1,1) and standard deviation 0.1 (left) and 1.3 (right).",
"Please refer to Section 4 for more details.",
"(O (d log d) ).",
"For training, we use pairwise ranking loss.",
"ComplEx (Trouillon et al., 2016) represents entities and relations as vectors in C d .",
"The compatibility of entity pairs is measured using entry-wise product between head and complex conjugate of tail entity vectors.",
"σ ComplEx (h, r, t) = Re(r (h t )) (5) In contrast to (3), using complex vectors in (5) allows ComplEx to handle symmetric, asymmetric and anti-symmetric relations using the same score function.",
"Similar to DistMult, logistic loss is used for training the model.",
"Metrics For our geometrical analysis, we first define a term 'alignment to mean' (ATM) of a vector v belonging to a set of vectors V, as the cosine similarity 1 between v and the mean of all vectors in V. ATM(v, V) = cosine v, 1 |V| x∈V x We also define 'conicity' of a set V as the mean ATM of all vectors in V. By this definition, a high value of Conicity(V) would imply that the vectors in V lie in a narrow cone centered at origin.",
"In other words, the vectors in the set V are highly aligned with each other.",
"In addition to that, we define the variance of ATM across all vectors in V, as the 'vector spread'(VS) of set V, For each method, a plot averaged across entity frequency bins is shown.",
"From these plots, we conclude that entity embeddings from additive models tend to have low (positive as well as negative) ATM and thereby low Conicity and high vector spread.",
"Interestingly, this is reversed in case of multiplicative methods.",
"Please see Section 6.1 for more details.",
"Conicity(V) = 1 |V| v∈V ATM(v, V) 1 cosine(u, v) = u v u v Dataset VS(V) = 1 |V| v∈V ATM(v, V)−Conicity(V) Experimental Setup Datasets: We run our experiments on subsets of two widely used datasets, viz., Freebase (Bollacker et al., 2008) and WordNet (Miller, 1995) , called FB15k and WN18 (Bordes et al., 2013) , respectively.",
"We detail the characteristics of these datasets in Table 2 .",
"Please note that while the results presented in Section 6 are on the FB15K dataset, we reach the same conclusions on WN18.",
"The plots for our experiments on WN18 can be found in the Supplementary Section.",
"Hyperparameters: We experiment with multiple values of hyperparameters to understand their effect on the geometry of KG embeddings.",
"Specifically, we vary the dimension of the generated vectors between {50, 100, 200} and the number of negative samples used during training between {1, 50, 100}.",
"For more details on algorithm specific hyperparameters, we refer the reader to the Supplementary Section.",
"2 2 For training, we used codes from https://github.",
"Frequency Bins: We follow (Mimno and Thompson, 2017) for entity and relation samples used in the analysis.",
"Multiple bins of entities and relations are created based on their frequencies and 100 randomly sampled vectors are taken from each bin.",
"These set of sampled vectors are then used for our analysis.",
"For more information about sampling vectors, please refer to (Mimno and Thompson, 2017) .",
"Results and Analysis In this section, we evaluate the following questions.",
"• Does model type (e.g., additive vs multiplicative) have any effect on the geometry of embeddings?",
"(Section 6.",
"For each method, a plot averaged across entity frequency bins is shown.",
"Trends in these plots are similar to those in Figure 2 .",
"Main findings from these plots are summarized in Section 6.1.",
"• Does negative sampling have any effect on the embedding geometry?",
"(Section 6.2) • Does the dimension of embedding have any effect on its geometry?",
"(Section 6.3) • How is task performance related to embedding geometry?",
"(Section 6.4) In each subsection, we summarize the main findings at the beginning, followed by evidence supporting those findings.",
"Effect of Model Type on Geometry Summary of Findings: Additive: Low conicity and high vector spread.",
"Multiplicative: High conicity and low vector spread.",
"In this section, we explore whether the type of the score function optimized during the training has any effect on the geometry of the resulting embedding.",
"For this experiment, we set the number of negative samples to 1 and the vector dimension to 100 (we got similar results for 50-dimensional vectors).",
"Figure 2 and Figure 3 show the distribution of ATMs of these sampled entity and relation vectors, respectively.",
"3 Entity Embeddings: As seen in Figure 2 , there is a stark difference between the geometries of entity vectors produced by additive and multiplicative models.",
"The ATMs of all entity vectors produced by multiplicative models are positive with very low vector spread.",
"Their high conicity suggests that they are not uniformly dispersed in the vector space, but lie in a narrow cone along the mean vector.",
"This is in contrast to the entity vectors obtained from additive models which are both positive and negative with higher vector spread.",
"From the lower values of conicity, we conclude that entity vectors from additive models are evenly dispersed in the vector space.",
"This observation is also reinforced by looking at the high vector spread of additive models in comparison to that of multiplicative models.",
"We also observed that additive models are sensitive to the frequency of entities, with high frequency bins having higher conicity than low frequency bins.",
"However, no such pattern was observed for multiplicative models and In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.2 conicity was consistently similar across frequency bins.",
"For clarity, we have not shown different plots for individual frequency bins.",
"Relation Embeddings: As in entity embeddings, we observe a similar trend when we look at the distribution of ATMs for relation vectors in Figure 3 .",
"The conicity of relation vectors generated using additive models is almost zero across frequency bands.",
"This coupled with the high vector spread observed, suggests that these vectors are scattered throughout the vector space.",
"Relation vectors from multiplicative models exhibit high conicity and low vector spread, suggesting that they lie in a narrow cone centered at origin, like their entity counterparts.",
"Effect of Number of Negative Samples on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in #NegativeSamples for both entities and relations.",
"Multiplicative: Conicity increases while average vector length decrease with increasing #NegativeSamples for entities.",
"Conicity decreases, while average vector length remains constant (except HolE) for relations.",
"For experiments in this section, we keep the vector dimension constant at 100.",
"Entity Embeddings: As seen in Figure 4 (left) , the conicity of entity vectors increases as the number of negative samples is increased for multiplicative models.",
"In contrast, conicity of the entity vectors generated by additive models is unaffected by change in number of negative samples and they continue to be dispersed throughout the vector space.",
"From Figure 4 (right), we observe that the average length of entity vectors produced by additive models is also invariant of any changes in number of negative samples.",
"On the other hand, increase in negative sampling decreases the average entity vector length for all multiplicative models except HolE.",
"The average entity vector length for HolE is nearly 1 for any number of negative samples, which is understandable considering it constrains the entity vectors to lie inside a unit ball (Nickel et al., 2016) .",
"This constraint is also enforced by the additive models: TransE, TransR, and STransE.",
"Relation Embeddings: Similar to entity embeddings, in case of relation vectors trained using additive models, the average length and conicity do not change while varying the number of negative samples.",
"However, the conicity of relation vectors from multiplicative models decreases with increase in negative sampling.",
"The average relation vector length is invariant for all multiplicative methods, except for HolE.",
"We see a surprisingly big jump in average relation vector length for HolE going from 1 to 50 negative samples, but it does not change after that.",
"Due to space constraints in the paper, we refer the reader to the Supplementary Section for plots discussing the effect of number of negative samples on geometry of relation vectors.",
"We note that the multiplicative score between two vectors may be increased by either increasing the alignment between the two vectors (i.e., increasing Conicity and reducing vector spread between them), or by increasing their lengths.",
"It is interesting to note that we see exactly these effects in the geometry of multiplicative methods In each bar group, first three models are additive, while the last three are multiplicative.",
"Main findings from these plots are summarized in Section 6.3. analyzed above.",
"Correlation with Geometry of Word Embeddings Our conclusions from the geometrical analysis of entity vectors produced by multiplicative models are similar to the results in (Mimno and Thompson, 2017) , where increase in negative sampling leads to increased conicity of word vectors trained using the skip-gram with negative sampling (SGNS) method.",
"On the other hand, additive models remain unaffected by these changes.",
"SGNS tries to maximize a score function of the form w T · c for positive word context pairs, where w is the word vector and c is the context vector (Mikolov et al., 2013) .",
"This is very similar to the score function of multiplicative models as seen in Table 1 .",
"Hence, SGNS can be considered as a multiplicative model in the word domain.",
"Hence, we argue that our result on the increase in negative samples increasing the conicity of vectors trained using a multiplicative score function can be considered as a generalization of the one in (Mimno and Thompson, 2017) .",
"Effect of Vector Dimension on Geometry Summary of Findings: Additive: Conicity and average length are invariant to changes in dimension for both entities and relations.",
"Multiplicative: Conicity decreases for both entities and relations with increasing dimension.",
"Average vector length increases for both entities and relations, except for HolE entities.",
"Entity Embeddings: To study the effect of vec-tor dimension on conicity and length, we set the number of negative samples to 1, while varying the vector dimension.",
"From Figure 5 (left), we observe that the conicity for entity vectors generated by any additive model is almost invariant of increase in dimension, though STransE exhibits a slight decrease.",
"In contrast, entity vector from multiplicative models show a clear decreasing pattern with increasing dimension.",
"As seen in Figure 5 (right) , the average lengths of entity vectors from multiplicative models increase sharply with increasing vector dimension, except for HolE.",
"In case of HolE, the average vector length remains constant at one.",
"Deviation involving HolE is expected as it enforces entity vectors to fall within a unit ball.",
"Similar constraints are enforced on entity vectors for additive models as well.",
"Thus, the average entity vector lengths are not affected by increasing vector dimension for all additive models.",
"Relation Embeddings: We reach similar conclusion when analyzing against increasing dimension the change in geometry of relation vectors produced using these KG embedding methods.",
"In this setting, the average length of relation vectors learned by HolE also increases as dimension is increased.",
"This is consistent with the other methods in the multiplicative family.",
"This is because, unlike entity vectors, the lengths of relation vectors of HolE are not constrained to be less than unit length.",
"Due to lack of space, we are unable to show plots for relation vectors here, but the same can be found in the Supplementary Section.",
"Relating Geometry to Performance Summary of Findings: Additive: Neither entites nor relations exhibit correlation between geometry and performance.",
"Multiplicative: Keeping negative samples fixed, lower conicity or higher average vector length for entities leads to improved performance.",
"No relationship for relations.",
"In this section, we analyze the relationship between geometry and performance on the Link prediction task, using the same setting as in (Bordes et al., 2013) .",
"Figure 6 (left) presents the effects of conicity of entity vectors on performance, while Figure 6 (right) shows the effects of average entity vector length.",
"4 As we see from Figure 6 (left), for fixed number of negative samples, the multiplicative model with lower conicity of entity vectors achieves better performance.",
"This performance gain is larger for higher numbers of negative samples (N).",
"Additive models don't exhibit any relationship between performance and conicity, as they are all clustered around zero conicity, which is in-line with our observations in previous sections.",
"In Figure 6 (right) , for all multiplicative models except HolE, a higher average entity vector length translates to better performance, while the number of negative samples is kept fixed.",
"Additive models and HolE don't exhibit any such patterns, as they are all clustered just below unit average entity vector length.",
"The above two observations for multiplicative models make intuitive sense, as lower conicity and higher average vector length would both translate to vectors being more dispersed in the space.",
"We see another interesting observation regarding the high sensitivity of HolE to the number of negative samples used during training.",
"Using a large number of negative examples (e.g., N = 50 or 100) leads to very high conicity in case of HolE.",
"Figure 6 (right) shows that average entity vector length of HolE is always one.",
"These two observations point towards HolE's entity vectors lying in a tiny part of the space.",
"This translates to HolE performing poorer than all other models in case of high numbers of negative sampling.",
"We also did a similar study for relation vectors, but did not see any discernible patterns.",
"Conclusion In this paper, we have initiated a systematic study into the important but unexplored problem of analyzing geometry of various Knowledge Graph (KG) embedding methods.",
"To the best of our knowledge, this is the first study of its kind.",
"Through extensive experiments on multiple realworld datasets, we are able to identify several insights into the geometry of KG embeddings.",
"We have also explored the relationship between KG embedding geometry and its task performance.",
"We have shared all our source code to foster further research in this area."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.2",
"4",
"5",
"6",
"6.2",
"6.2.1",
"7"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Overview of KG Embedding Methods",
"Additive KG Embedding Methods",
"Multiplicative KG Embedding Methods",
"Metrics",
"Experimental Setup",
"Results and Analysis",
"Effect of Number of Negative Samples on Geometry",
"Correlation with Geometry of Word Embeddings",
"Conclusion"
]
} | GEM-SciDuet-train-132#paper-1355#slide-15 | Conclusion and Future Works | We initiated the study of geometrical behavior of KG embeddings and presented various insights.
Explore whether other entity/relation features (e.g., entity category) have any correlation with geometry.
Explore other geometrical metrics that correlate better with performance and use them for learning better KG embeddings. | We initiated the study of geometrical behavior of KG embeddings and presented various insights.
Explore whether other entity/relation features (e.g., entity category) have any correlation with geometry.
Explore other geometrical metrics that correlate better with performance and use them for learning better KG embeddings. | []
GEM-SciDuet-train-133#paper-1358#slide-0 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
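As an illustration of the conditioning mechanism described above (concatenating the control embedding to the decoder input on every step), here is a minimal PyTorch-style sketch; the class name, layer sizes and wiring are assumptions for illustration, not the authors' implementation:

    import torch
    import torch.nn as nn

    class CTDecoderStep(nn.Module):
        # One decoding step that concatenates a learned control embedding
        # to the word embedding before the recurrent layer.
        def __init__(self, vocab_size, emb_dim=300, z_values=10, z_dim=10, hid=1024):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, emb_dim)
            self.z_emb = nn.Embedding(z_values, z_dim)  # one embedding per z value
            self.rnn = nn.LSTM(emb_dim + z_dim, hid, batch_first=True)
            self.out = nn.Linear(hid, vocab_size)

        def forward(self, prev_words, z, state=None):
            # prev_words: (batch, 1) previous token ids; z: (batch,) control ids
            w = self.word_emb(prev_words)                  # (batch, 1, emb_dim)
            c = self.z_emb(z).unsqueeze(1)                 # (batch, 1, z_dim)
            h, state = self.rnn(torch.cat([w, c], dim=-1), state)
            return self.out(h), state                      # next-word logits

Training then applies the standard cross-entropy loss over the gold response tokens, exactly as in the loss_CT expression above.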
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-0 | Natural Language Generation task spectrum | Less open-ended More open-ended
Neural LMs more successful Neural LMs less successful
Makes errors like repetition and generic response (under certain decoding algorithms).
Neural LMs less successful
Difficulty learning to make high-level decisions.
Control = ability to specify desired attributes of the text at test time.
Control is less important Control is more important
We can use control to fix errors, and allow us to handle some high-level decisions.
Mostly word-level decisions Requires high-level decisions
Control is less important No automatic metric for overall quality. Control is more important
Eval is difficult Eval is fiendish
Dialogue is even more complex:
Single-turn or multi-turn eval?
Interactive or static conversation? | Less open-ended More open-ended
Neural LMs more successful Neural LMs less successful
Makes errors like repetition and generic response (under certain decoding algorithms).
Neural LMs less successful
Difficulty learning to make high-level decisions.
Control = ability to specify desired attributes of the text at test time.
Control is less important Control is more important
We can use control to fix errors, and allow us to handle some high-level decisions.
Mostly word-level decisions Requires high-level decisions
Control is less important No automatic metric for overall quality. Control is more important
Eval is difficult Eval is fiendish
Dialogue is even more complex:
Single-turn or multi-turn eval?
Interactive or static conversation? | [] |
GEM-SciDuet-train-133#paper-1358#slide-1 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-1 | Our research questions | By controlling multiple attributes of generated text and human-evaluating multiple aspects of conversational quality, we aim to answer the following:
1. How effectively can we control the different attributes?
Pretty well! But some control methods only work for some attributes.
2. How do the controllable attributes affect conversational quality aspects?
Strongly, especially controlling repetition, question-asking, and specificity vs genericness.
3. Can we use control to make a better chatbot overall?
Yes! But we should be careful defining "better overall". | By controlling multiple attributes of generated text and human-evaluating multiple aspects of conversational quality, we aim to answer the following:
1. How effectively can we control the different attributes?
Pretty well! But some control methods only work for some attributes.
2. How do the controllable attributes affect conversational quality aspects?
Strongly, especially controlling repetition, question-asking, and specificity vs genericness.
3. Can we use control to make a better chatbot overall?
Yes! But we should be careful defining "better overall". | [] |
GEM-SciDuet-train-133#paper-1358#slide-2 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-2 | PersonaChat task | I love to drink fancy tea.
I have a big library at home.
I'm a museum tour guide.
I have two dogs.
I like to work on vintage cars.
My favorite music is country.
I own two vintage Mustangs.
Hello, how are you doing?
Great thanks, just listening to my favorite Johnny Cash album!
Nice! I'm not much of a music fan myself, but I do love to read.
Me too! I just read a book about the history of the auto industry.
Most successful teams built neural sequence generation systems. (Dinan et al 2019)
The winning team, Lost in Conversation, used a finetuned version of GPT.
Our baseline model is a standard LSTM-based seq2seq architecture with attention.
It is pretrained on 2.5 million Twitter message/response pairs, then finetuned on PersonaChat. | I love to drink fancy tea.
I have a big library at home.
I'm a museum tour guide.
I have two dogs.
I like to work on vintage cars.
My favorite music is country.
I own two vintage Mustangs.
Hello, how are you doing?
Great thanks, just listening to my favorite Johnny Cash album!
Nice! I'm not much of a music fan myself, but I do love to read.
Me too! I just read a book about the history of the auto industry.
Most successful teams built neural sequence generation systems. (Dinan et al 2019)
The winning team, Lost in Conversation, used a finetuned version of GPT.
Our baseline model is a standard LSTM-based seq2seq architecture with attention.
It is pretrained on 2.5 million Twitter message/response pairs, then finetuned on PersonaChat. | [] |
GEM-SciDuet-train-133#paper-1358#slide-3 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-3 | What attributes do we control | Goal: Reduce repetition (within and across utterances)
Goal: Reduce genericness of responses (e.g. oh that's cool)
Goal: Respond more on-topic; don't ignore user
Goal: Find the optimal rate of question-asking | Goal: Reduce repetition (within and across utterances)
Goal: Reduce genericness of responses (e.g. oh that's cool)
Goal: Respond more on-topic; don't ignore user
Goal: Find the optimal rate of question-asking | [] |
GEM-SciDuet-train-133#paper-1358#slide-4 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-4 | What quality aspects do we measure | Does the bot repeat itself?
Did you find the bot interesting to talk to?
Does the bot say things that don't make sense?
Does the bot use English naturally?
Does the bot pay attention to what you say?
Does the bot ask a good amount of questions?
Is it a person or a bot?
Is it enjoyable to talk to? | Does the bot repeat itself?
Did you find the bot interesting to talk to?
Does the bot say things that don't make sense?
Does the bot use English naturally?
Does the bot pay attention to what you say?
Does the bot ask a good amount of questions?
Is it a person or a bot?
Is it enjoyable to talk to? | [] |
GEM-SciDuet-train-133#paper-1358#slide-5 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-5 | Control methods | Conditional Training (CT): Train the model to generate response y, conditioned on the input x, and the desired output attribute z.
Weighted Decoding (WD): During decoding, increase/decrease the probability of generating words w in proportion to features f(w). | Conditional Training (CT): Train the model to generate response y, conditioned on the input x, and the desired output attribute z.
Weighted Decoding (WD): During decoding, increase/decrease the probability of generating words w in proportion to features f(w). | [] |
GEM-SciDuet-train-133#paper-1358#slide-6 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-6 | Q1 How effectively can we control attributes | Attributes: repetition, specificity, question-asking, response-relatedness
Conditional Training (CT): requires sufficient training examples for the attribute; ineffective at learning complex relationships between input and output (e.g. response-relatedness).
Weighted Decoding (WD): requires attribute to be defined at the word-level; effective for: repetition, response-relatedness, specificity. | Attributes: repetition, specificity, question-asking, response-relatedness
Conditional Training (CT): requires sufficient training examples for the attribute; ineffective at learning complex relationships between input and output (e.g. response-relatedness).
Weighted Decoding (WD): requires attribute to be defined at the word-level; effective for: repetition, response-relatedness, specificity. | []
GEM-SciDuet-train-133#paper-1358#slide-7 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-7 | Controlling specificity WD and CT | WD: Large range, but degenerate output at the extremes
CT: Smaller range, but generally well-formed output | WD: Large range, but degenerate output at the extremes
CT: Smaller range, but generally well-formed output | [] |
GEM-SciDuet-train-133#paper-1358#slide-8 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-8 | Controlling response relatedness WD | Output is degenerate when weight is too high | Output is degenerate when weight is too high | [] |
GEM-SciDuet-train-133#paper-1358#slide-9 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-9 | Q2 How does control affect human eval | Reduce n-gram repetition to human level
Increase specificity (reduce genericness) to human level
Increase response-relatedness (similarity to last utterance) | Reduce n-gram repetition to human level
Increase specificity (reduce genericness) to human level
Increase response-relatedness (similarity to last utterance) | [] |
GEM-SciDuet-train-133#paper-1358#slide-10 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-10 | Q3 Can we make a better chatbot overall | Yes! By controlling repetition, specificity and question-asking, we achieve near-human engagingness (i.e. enjoyability) ratings.
Our raw engagingness score matches the
ConvAI2 competition winner's GPT-based model, even though ours is:
much smaller (2 layers vs 12) and trained on 12x less data
However: On the humanness (i.e. Turing test) metric, our models are nowhere near human-level! | Yes! By controlling repetition, specificity and question-asking, we achieve near-human engagingness (i.e. enjoyability) ratings.
Our raw engagingness score matches the
ConvAI2 competition winner's GPT-based model, even though ours is:
much smaller (2 layers vs 12) and trained on 12x less data
However: On the humanness (i.e. Turing test) metric, our models are nowhere near human-level! | [] |
GEM-SciDuet-train-133#paper-1358#slide-11 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
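A minimal sketch of the conditioning mechanism just described: the control variable z is embedded (length 10, as stated above) and the embedding is concatenated to the word embedding at every decoder step. This is illustrative, not the authors' code; the class name, hidden size, and single-layer LSTM are assumptions.

```python
import torch
import torch.nn as nn

class CTDecoder(nn.Module):
    """Conditional-training decoder sketch: z is embedded and concatenated
    to the word embedding on every step."""
    def __init__(self, vocab_size, num_z_values, emb_dim=300, z_dim=10, hid_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.z_emb = nn.Embedding(num_z_values, z_dim)  # one embedding per z value
        self.rnn = nn.LSTM(emb_dim + z_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, prev_words, z, state=None):
        # prev_words: (batch, T) previous target tokens; z: (batch,) bucket ids
        zvec = self.z_emb(z).unsqueeze(1).expand(-1, prev_words.size(1), -1)
        inp = torch.cat([self.word_emb(prev_words), zvec], dim=-1)
        out, state = self.rnn(inp, state)
        return self.out(out), state  # cross-entropy over these logits gives loss_CT
```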
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-11 | Engagingness vs Humanness | Finding: Our bots are (almost) as engaging as humans, but they're clearly non-human.
2. On this task, the human "engagingness" performance may be artificially low.
Turkers chatting for money are less engaging than people chatting for fun.
This may be why the human-level engagingness scores are easy to match. | Finding: Our bots are (almost) as engaging as humans, but they're clearly non-human.
2. On this task, the human "engagingness" performance may be artificially low.
Turkers chatting for money are less engaging than people chatting for fun.
This may be why the human-level engagingness scores are easy to match. | [] |
GEM-SciDuet-train-133#paper-1358#slide-12 | 1358 | What makes a good conversation? How controllable attributes affect human judgments | A good conversation requires balance -between simplicity and detail; staying on topic and changing it; asking questions and answering them. Although dialogue agents are commonly evaluated via human judgments of overall quality, the relationship between quality and these individual factors is less well-studied. In this work, we examine two controllable neural text generation methods, conditional training and weighted decoding, in order to control four important attributes for chitchat dialogue: repetition, specificity, response-relatedness and question-asking. We conduct a large-scale human evaluation to measure the effect of these control parameters on multi-turn interactive conversations on the PersonaChat task. We provide a detailed analysis of their relationship to high-level aspects of conversation, and show that by controlling combinations of these variables our models obtain clear improvements in human quality judgments. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226,
227,
228,
229,
230,
231,
232,
233,
234,
235,
236,
237,
238,
239,
240,
241,
242,
243,
244,
245,
246,
247,
248,
249,
250,
251,
252,
253,
254
],
"paper_content_text": [
"Introduction Neural generation models for dialogue, despite their ubiquity in current research, are still poorly understood.",
"Well known problems, such as the genericness and repetitiveness of responses (Serban et al., 2016a) , remain without a de facto solution.",
"Strikingly, the factors that determine human judgments of overall conversation quality are almost entirely unexplored.",
"Most works have been limited to the next utterance prediction problem, whereas a multi-turn evaluation is necessary to evaluate the quality of a full conversation.",
"In this work we both (i) conduct a large-scale study to identify the fine-grained factors governing human judgments of full conversations, and (ii) develop models that apply our findings in practice, * A.S. completed most of this work at Facebook (FAIR).",
"leading to state-of-the-art performance.",
"Specifically, we identify and study eight aspects of conversation that can be measured by human judgments, while varying four types of low-level attributes that can be algorithmically controlled in neural models; see Figure 1 .",
"To control the lowlevel model attributes, we consider two simple but general algorithms: conditional training, in which the neural model is conditioned on additional control features, and weighted decoding, in which control features are added to the decoding scoring function at test time only.",
"One major result of our findings is that existing work has ignored the importance of conversational flow, as standard models (i) repeat or contradict previous statements, (ii) fail to balance specificity with genericness, and (iii) fail to balance asking questions with other dialogue acts.",
"Conducting experiments on the PersonaChat task (Zhang et al., 2018b) , we obtain significantly higher engagingness scores than the baseline by optimizing control of repetition, specificity and question-asking over multiple turns.",
"Using these findings, our best model matches the performance of the winning entry in the recent NeurIPS ConvAI2 competition (Dinan et al., 2019) , which was trained on much more data but had no control (see Section 8.1).",
"Our code, pretrained models, and full chatlogs, are available at https://parl.ai/projects/ controllable_dialogue.",
"Related Work Dialogue Dialogue evaluation is relatively well understood in goal-oriented tasks, where automated approaches can be coded by measuring task completion (Bordes et al., 2017; El Asri et al., 2017; Hastie, 2012; Henderson et al., 2014; Wen et al., 2017) .",
"Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997) .",
"However in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.",
"While word-overlap metrics are effective for questionanswering and machine translation, for dialogue they have little to no correlation with human judgments (Liu et al., 2016; Novikova et al., 2017 )this is due to the open-ended nature of dialogue.",
"There are more recent attempts to find better automatic approaches, such as adversarial evaluation (Li et al., 2017b) and learning a scoring model (Lowe et al., 2017) , but their value is still unclear.",
"Nevertheless, a number of studies only use automatic metrics, with no human study at all (Lowe et al., 2015; Parthasarathi and Pineau, 2018; Serban et al., 2016b) .",
"Other works do use human evaluations (Dinan et al., 2018; Li et al., 2016a,b; Venkatesh et al., 2017; Vinyals and Le, 2015; Zhang et al., 2018b) , typically reporting just one type of judgment (either quality or appropriateness) via a Likert scale or pairwise comparison.",
"Most of those works only consider single turn evaluations, often with a shortened dialogue history, rather than full multi-turn dialogue.",
"A more comprehensive evaluation strategy has been studied within the scope of the Alexa prize (Venkatesh et al., 2017; Guo et al., 2018) by combining multiple automatic metrics designed to capture various conversational aspects (engagement, coherence, domain coverage, conversational depth and topical diversity).",
"Though these aspects have some similarity to the aspects studied here, we also focus on lower-level aspects (e.g.",
"avoiding repetition, fluency), to understand how they correspond to both our controllable attributes, and to overall quality judgments.",
"Controllable neural text generation Researchers have proposed several approaches to control aspects of RNN-based natural language generation such as sentiment, length, speaker style and tense (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; Hu et al., 2017; Kikuchi et al., 2016; Peng et al., 2018; Wang et al., 2017) .",
"In particular, several works use control to tackle the same common sequence-to-sequence problems we address here (particularly genericness and unrelated output), in the context of single-turn response generation (Baheti et al., 2018; Li et al., 2016a Li et al., , 2017a Shen et al., 2017; Xing et al., 2017; Zhang et al., 2018a; Zhou et al., 2017) .",
"By contrast, we focus on developing controls for, and human evaluation of, multi-turn interactive dialogue -this includes a new method (described in Section 5) to control attributes at the dialogue level rather than the utterance level.",
"In this work, we require a control method that is both general-purpose (one technique to simultaneously control many attributes) and easily tunable (the control setting is adjustable after training).",
"Given these constraints, we study two control methods: conditional training (variants of which have been described by Fan et al.",
"(2018) ; Kikuchi et al.",
"(2016) ; Peng et al.",
"(2018) ) and weighted decoding (described by Ghazvininejad et al.",
"(2017) as a general technique, and by Baheti et al.",
"(2018) to control response-relatedness).",
"To our knowledge, this work is the first to systematically compare the effectiveness of two general-purpose control methods across several attributes.",
"3 The PersonaChat dataset PersonaChat (Zhang et al., 2018b ) is a chitchat dialogue task involving two participants (two humans or a human and a bot).",
"Each participant is given a persona -a short collection of personal traits such as I'm left handed or My favorite season is spring -and are instructed to get to know each other by chatting naturally using their designated personas, for 6-8 turns.",
"The training set contains 8939 conversations and 955 personas, collected via crowdworkers, plus 1000 conversations and 100 personas for validation, and a similar number in the hidden test set.",
"The PersonaChat task was the subject of the NeurIPS 2018 ConvAI2 Challenge (Dinan et al., 2019) , in which competitors were first evaluated with respect to automatic met-rics (perplexity, hits@1 and F1 score), and then with respect to human judgment via the question \"How much did you enjoy talking to this user?\"",
"on a scale of 1-4.",
"Baseline model Our baseline model is a 2-layer LSTM sequenceto-sequence model with attention.",
"On any dialogue turn, the input x to the encoder is the entire dialogue history (separated using unique speakeridentifying tokens), with the model's own persona prepended.",
"Conditioned on this input sequence x, the decoder generates a response y.",
"Except when stated otherwise, all our models decode using beam search with beam size 20.",
"We initialized the word embedding matrix with 300-dimensional GloVe embeddings (Pennington et al., 2014) .",
"Using the ParlAI framework (Miller et al., 2017) , we pretrained the model on a dataset of 2.5 million Twitter message-response pairs, 1 then fine-tuned it on PersonaChat.",
"On the Per-sonaChat validation set, the baseline model has a perplexity of 26.83 and F1 of 17.02, which would have placed us 4th out of 26 models in the Con-vAI2 competition (Dinan et al., 2019) .",
"We attempt to improve over this baseline using control.",
"Controllable text generation methods Suppose we have a sequence-to-sequence model which gives P (y|x) = Π t P (y t |x, y 1 , .",
".",
".",
", y t−1 ), the conditional probability of a response y (the model's next utterance) given input x (the context, which in our case includes the model's own persona and the dialogue history).",
"Contrary to most previous work, which controls at the sentence level, we wish to control attributes of the output y at the dialogue levelmeaning that a single control setting is used for a whole dialogue.",
"For example, to control questionasking, we provide a control setting at the beginning of each dialogue (e.g.",
"20% questions or 70% questions) rather than providing a control setting for each utterance (e.g.",
"is a question or isn't a question).",
"With this approach, the sequence-tosequence model is able to choose what value the controlled attribute should take for any particular utterance, but we are able to choose the overall distribution.",
"We find that this approach works well -for example, the sequence-to-sequence model is generally good at detecting when to ask a question.",
"In particular, this is easier than the alternative: developing a separate process to decide, for each utterance, whether to ask a question.",
"In this section, we describe the two methods -which we call Conditional Training (CT) and Weighted Decoding (WD) -that we use to control attributes of the output y at the dialogue level.",
"Conditional Training (CT) Conditional Training (Fan et al., 2018; Kikuchi et al., 2016; Peng et al., 2018) is a method to learn a sequence-to-sequence model P (y|x, z), where z is a discrete control variable.",
"If the control attribute is naturally continuous (for example in our work, repetitiveness, specificity and response-relatedness), we use z to represent bucketed ranges.",
"For a binary attribute like questionasking, z represents an overall probability (as explained in Section 5).",
"To train a CT model, we first automatically annotate every (x, y) pair in the training set with the attribute we wish to control (for example, whether y contains a question mark).",
"During training, for each example we determine the corresponding z value (for continuous attributes, this simply means sorting into the correct bucket; for question-asking, see Section 6.4).",
"Next, the control variable z is represented via an embedding (each of the possible values of z has its own embedding).",
"For all our experiments, the embedding is of length 10; this was determined via hyperparameter tuning.",
"There are several possible ways to condition the sequence-to-sequence model on z -for example, append z to the end of the input sequence, or use z as the START symbol for the decoder.",
"We find it most effective to concatenate z to the decoder's input on every step.",
"2 Lastly, the CT model learns to produce y = y 1 , .",
".",
".",
", y T by optimizing the cross-entropy loss: loss CT = − 1 T T t=1 log P (y t |x, z, y 1 , .",
".",
".",
", y t−1 ) Our CT models are initialized with the parameters from the baseline sequence-to-sequence model P (y|x) (the new decoder parameters are initialized with small random values), then fine-tuned to optimize loss CT on the PersonaChat training set, until convergence of loss CT on the validation set.",
"Weighted Decoding (WD) Weighted Decoding (Ghazvininejad et al., 2017) is a decoding method that increases or decreases the probability of words with certain features.",
"The technique is applied only at test time, requiring no change to the training method.",
"A limitation of WD is that the controllable attribute must be defined at the word-level; any desired utterance-level attribute must be redefined via word-level features.",
"In weighted decoding, on the t th step of decoding, a partial hypothesis y <t = y 1 , .",
".",
".",
", y t−1 is expanded by computing the score for each possible next word w in the vocabulary: score(w, y <t ; x) = score(y <t ; x) + log P RNN (w|y <t , x) + i w i * f i (w; y <t , x).",
"Here, log P RNN (w|y <t , x) is the log-probability of the word w calculated by the RNN, score(y <t ; x) is the accumulated score of the already-generated words in the hypothesis y <t , and f i (w; y <t , x) are decoding features with associated weights w i .",
"There can be multiple features f i (to control multiple attributes), and the weights w i are hyperparameters to be chosen.",
"A decoding feature f i (w; y <t , x) assigns a real value to the word w, in the context of the text generated so far y <t and the context x.",
"The feature can be continuous (e.g.",
"the unigram probability of w), discrete (e.g.",
"the length of w in characters), or binary (e.g.",
"whether w starts with the same letter as the last word in y <t ).",
"A positive weight w i increases the probability of words w that score highly with respect to f i ; a negative weight decreases their probability.",
"Note that weighted decoding and conditional training can be applied simultaneously (i.e.",
"train a CT model then apply WD at test time) -a strategy we use in our experiments.",
"Controlling conversational attributes In this section, we describe how we use conditional training and weighted decoding to control four attributes: repetition, specificity, responserelatedness and question-asking.",
"We evaluate the effectiveness of both control methods via automatic metrics (i.e., measuring how well the attribute was controlled), and use our findings to select control methods and control settings to be explored further via human evaluation (Section 8).",
"Repetition Our baseline model exhibits three types of repetition, which we call external repetition (selfrepetition across utterances), internal repetition (self-repetition within utterances), and partner repetition (repeating the conversational partner).",
"To control repetition with weighted decoding, 3 we define five n-gram based decoding features (see Appendix D).",
"Three of these features (extrep bigram, intrep bigram and partnerrep bigram) identify repeating bigrams for the three repetition types.",
"The other two features (extrep unigram and intrep unigram) identify repeating content words.",
"By applying a negative weight to these features, we can reduce repetition.",
"In particular, if the weight is −∞, our method is equivalent to n-gram blocking as described by Kulikov et al.",
"(2018) .",
"We observe that repetition control is very important, thus all further control experiments include repetition control.",
"Specificity Like many sequence-to-sequence models using beam search decoding, our baseline frequently asks generic questions such as What music do you like?",
"and gives dull, unspecific responses, such as I like all kinds of music.",
"We control specificity using Normalized Inverse Document Frequency (NIDF) as a measure of word rareness.",
"4 The Inverse Document Frequency of a word w is IDF(w) = log(R/c w ) where R is the number of responses in the dataset, and c w is the number of those responses that contain w. Normalized IDF (which ranges from 0 to 1) is NIDF(w) = IDF(w) − min idf max idf − min idf (1) where min idf and max idf are the minimum and maximum IDFs, taken over all words in the vocabulary.",
"To control specificity with weighted decoding, we use NIDF as a decoding feature.",
"As shown in Table 1 , this method produces reasonable outputs when the feature weight is within a certain range, but at the extremes the outputs are nonsensical.",
"The boundary for nonsensical output differs from example to example.",
"To control specificity with conditional training, we define the specificity of an utterance y to be the mean NIDF of the words in y.",
"Thus our control variable z is mean NIDF (discretized into 10 equal-sized buckets).",
"As shown in Table 1 , this method gives outputs with a narrower NIDF range, but overall produces less nonsensical outputs.",
"Response-relatedness In conversation, it's generally desirable to produce a response that is related to the partner's last utterance; for example if the partner says My grandfather died last month, it is appropriate to say I'm so sorry.",
"Were you close to your grandfather?",
"However, our baseline model frequently responds with unrelated utterances like Do you have any pets?",
"To control response-relatedness with weighted decoding, we use the decoding feature resp rel: resp rel(w; y <t , x) = cos sim(word emb(w), sent emb( )) where word emb(w) is the GloVe embedding for the word w, sent emb( ) is the sentence embedding for the partner's last utterance (note is part of the context x), and cos sim is the cosine similarity between the two.",
"In particular, the sentence embedding sent emb(s) for an utterance s is a weighted average of the GloVe embeddings of the words in s, with the first principal component projected out; for full details, see Arora et al.",
"(2017) .",
"This method of controlling response-relatedness is similar to that described in (Baheti et al., 2018) .",
"We find that weighted decoding is effective to control the semantic relatedness of the model's response to the partner's last utterance (see Table 2 ).",
"As before, we find that extreme weights lead to nonsensical output.",
"To control response-relatedness with conditional training, we try defining the control variable z to be cos sim(sent emb(y), sent emb( )), the overall cosine similarity between the partner's last utterance and the model's response y (again, we discretize z).",
"However, we find this method ineffective -the CT model learns only a very weak connection between z and the semantic relatedness of the output (see Section 7 for more details).",
"Question-asking Considerate chitchat requires a reciprocal asking and answering of questions -asking too few or too many can appear self-centered or nosy.",
"We control question-asking in order to study these trade-offs.",
"To control question-asking with weighted decoding, we use the binary decoding feature is qn word(w), which is equal to 1 if and only if the word w is in a pre-defined list of interrogative words (how, what, when, where, which, who, whom, whose, why, ?)",
".",
"We find this is a somewhat effective method to encourage or discourage questions, but with unintended side-effects: a negative weight can discourage valid non-question utterances that happen to contain interrogative words (such as I'm learning how to knit) and a positive weight can result in degenerate utterances (such as For conditional training, we regard an utterance y as containing a question if and only if y contains a question mark.",
"We train our CT model on a control variable z with 11 possible values: {0, .",
".",
".",
", 10}.",
"As discussed in Section 5, we wish to control question-asking at the distributional, dialogue level, rather than at the binary, utterance level.",
"Thus the setting z = i means that the model should produce, on average, utterances containing '?'",
"with probability i/10.",
"During training we randomly assign examples to buckets such that each bucket i is trained on examples with the correct proportion of questions (i/10), and all buckets have the same amount of training examples.",
"We find that conditional training is effective to control question-asking -as shown in Figure 2 , by increasing z from 0 to 10, we obtain a range of question-asking rates from 1.40% to 97.72%.",
"However, when we introduce repetition control, question-asking is reduced -in particular, the z = 10 setting (which should produce 100% questions) now only produces 79.67% questions.",
"The primary problem is the weighted decoding feature extrep bigram, which discourages bigrams that have appeared in previous utterances -this prevents the model from producing bigrams that commonly occur in many questions, such as do you and what is.",
"To fix this, we introduce an extra setting z = 10 (boost), in which we do not use the feature extrep bigram for weighted decoding during beam search, but we do use it to rerank the candidates after beam search.",
"This setting, which allows the model to produce necessary questionasking bigrams, yields a 99.54% question-asking rate, at the cost of slightly increased external bigram repetition (see Appendix F).",
"For controlling question-asking, conditional training is preferable to weighted decoding for two reasons.",
"Firstly, it allows us to achieve (close to) 0% questions, 100% questions, or anything in between, without introducing the risk of degenerate output.",
"Secondly, presence-of-a-question-mark captures the true attribute of interest (questionasking) more exactly and directly than presence of interrogative words.",
"For these reasons, only the CT method is considered in the human evaluation.",
"Comparison of control methods The previous section shows that conditional training and weighted decoding are both useful techniques, with different strengths and weaknesses.",
"The primary disadvantage of conditional training is that it sometimes fails to learn the connection between the control variable z and the target output y.",
"In practice, we find the model can learn simple attributes of the output (such as the presence of '?",
"', and overall genericness), but not relationships between the input and output (such as semantic relatedness).",
"By contrast, weighted decoding can force the desired feature to appear in the output by raising the weight arbitrarily high (though this may have unintended side-effects).",
"The primary disadvantage of weighted decoding is that it risks going off-distribution when the weight is too strong.",
"By contrast, conditional training produces mostly well-formed, indistribution outputs.",
"This highlights the importance of learned control -it is safer to learn to produce output that both satisfies the control variable and is appropriate, than to alter the decoding process to satisfy the control variable, potentially trading off appropriateness in the process.",
"Other considerations include: (1) Convenience: conditional training requires retraining; weighted decoding doesn't, but is slower at test time.",
"Attribute definition: conditional training can control sentence-level attributes, but they must be discrete.",
"By contrast, weighted decoding requires word-level features, but they can be continuous.",
"Human evaluation results In order to study the effect of our controllable attributes, we conduct a large-scale human evalua-tion of 28 model configurations (see Appendix E), plus human-human conversations for comparison.",
"Approach In our evaluation, a crowdworker chats with a model (or in the human-human case, another crowdworker) for six conversational turns, then answers eight multiple-choice questions which each capture different aspects of conversational quality: avoiding repetition, interestingness, making sense, fluency, listening, inquisitiveness, humanness and engagingness.",
"The eight questions are Likert questions on a 1-4 scale, where higher is better.",
"5 To match the ConvAI2 Challenge, we also add a persona retrieval question, in which the crowdworker is asked to select which of two possible personas was the model's persona.",
"For full details of the evaluation design, see Appendix B.",
"Our evaluation is the same as the ConvAI2 Challenge evaluation, but more detailed -Con-vAI2 includes only engagingness and persona retrieval.",
"6 As in the ConvAI2 challenge, each of our 28 model configurations was evaluated by over 100 crowdworkers, and the results were adjusted for annotator variance via a Bayesian calibration (Kulikov et al., 2018) .",
"In designing our evaluation, we aimed to capture the four aspects we expected to directly improve via control (avoiding repetition, interestingness, listening, inquisitiveness), two important error classes we thought would be affected by our controls (fluency, making sense), and two overall quality measures (engagingness, humanness).",
"Main findings In this section we summarize the main findings of our human evaluation -whose full results can be found in Appendices G and H, with sample conversations in Appendix C. As Figure 3 shows, controlling for repetition, specificity and question-asking all lead to large 5 Exceptions: Avoiding repetition is a 1-3 scale, as we found this gave clearer instructions.",
"Inquisitiveness has an optimal score of 3; 1 and 2 represent too little questionasking, and 4 represents too much.",
"6 There are three other minor differences between our evaluation and ConvAI2's: (1) We fix capitalization and spacing before showing the chatbot's utterances to crowdworkers, while ConvAI2 show the raw lowercase tokenized form.",
"We found the latter interferes with fluency evaluation.",
"(2) We conduct 6 dialogue turns, while ConvAI2 conducts 4-6.",
"This was necessary to evaluate repetitiveness.",
"(3) We use (publicly-available) validation set personas, while ConvAI2 uses (hidden) test set personas.",
"This enables us to release our evaluation chatlogs.",
"engagingness improvements over the greedy and beam-search baseline models.",
"In particular, we find that controlling for multi-turn (self) repetition is important and should be incorporated alongside other attribute control methods.",
"We found no improvement by controlling response-relatedness.",
"To better understand these overall engagingness improvements, we consider the full set of human judgments, shown in Figure 4 .",
"We find that reducing repetition leads to improvements across all our aspects of conversational quality.",
"Increasing specificity shows improvements in interestingness and listening ability over the repetition-controlled baseline, while increasing question-asking shows improvements in inquisitiveness and interestingness over the repetition-controlled baseline.",
"Our most engaging model, which controls both repetition and question-asking -marked 'Question (CT)' in Figure 3 (left) -matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score 7 of 3.1 (Dinan et al., 2019) .",
"However, the Con-vAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.",
"Lost in Conversation is based on the OpenAI GPT Language Model (Radford et al., 2018) , which is pretrained on the BookCorpus (Zhu et al., 2015) , which contains approximately 985 million words, whereas our model is pretrained on the Twitter dataset (approximately 79 million words).",
"Altogether, our evaluation clearly shows that controlling low-level attributes over multiple turns leads to improved overall quality.",
"Effect of controlled attributes Repetition (WD) We observe that selfrepetition across utterances (external repetition) is by far the most severe form of repetition in our beam search baseline model.",
"We evaluate several settings of the extrep bigram weighted decoding feature, and find that an aggressive repetition-reduction setting (reducing bigram repetition rate to below gold data levels) is rated best.",
"We also find that blocking repeated content words improves the avoiding repetition score.",
"See Appendices E, F and G for full details.",
"As shown in Figure 3 Figure 3 : Calibrated human judgments of engagingness for the baselines and best controlled models (left); for different specificity control settings (middle); and for different question-asking control settings (right).",
"over the beam search baseline in all metrics, and achieves close-to-human scores on all metrics except humanness.",
"This striking result demonstrates that repetition is by far the biggest limiting quality factor for naive sequence-to-sequence dialogue agents.",
"The result also emphasizes the importance of multi-turn dialogue evaluation to detect the problem.",
"We refer to this model as the repetitioncontrolled baseline, and use it as a basis for all remaining experiments (i.e., we control specificity, response-relatedness and question-asking on top of these repetition-control settings).",
"Specificity (WD, CT) For our weighted decoding models, the extreme settings (very generic and very specific) score poorly in engagingness due to the frequent presence of degenerate output -see Figure 3 (middle).",
"We find that the weight = 4 setting (which is more specific than the repetitioncontrolled baseline and about as specific as the gold data) maximizes engagingness.",
"As shown in Figure 3 (left) and Figure 4 , this more-specific model is rated more interesting, engaging, and a better listener than the repetition-controlled baseline, but at the cost of reduced fluency and making sense.",
"Our CT model with z = 7 (which has a similar NIDF level as WD with weight = 4) shows similar results, but the improvements are smaller.",
"For further discussion on the interestingness of our specificity models, see Section 8.3.",
"Response-relatedness (WD) We evaluated several control settings (weight = −10, 5, 10, 13) and found that none scored better than weight = 0 (no response-relatedness control); see Appendix H. This is surprising -prior to running the human evaluation, we annotated 100 examples ourselves to determine the best control settings.",
"While we identified a more responsive setting (weight = 5) as less likely than the uncontrolled model to ignore the user, crowdworkers rated it as a slightly worse listener than the uncontrolled model.",
"One explanation for this discrepancy is that the more responsive model takes more risks, using more rare words (0.197 NIDF, up from 0.178), and thus receives a lower makes-sense score (3.41, down from 3.70).",
"We hypothesize that, compared to us, the crowdworkers are less tolerant of slightly nonsensical output, and more tolerant of generic unrelated utterances.",
"Question-asking (CT) As shown in Figure 3 (right), a question-asking rate of 65.7% (z = 7) maximizes engagingness.",
"This setting, which asks more questions than both the repetition-controlled baseline (50.0%) and the human-produced gold data (28.8%), brings us closest to human-level engagingness -see Figure 3 (left).",
"Although we find that a rate of approximately 65.7% questionasking is the most engaging, a lower level (48.9%, or z = 4) is rated the best listener.",
"Lastly, we find that although asking too many questions is less engaging, most crowdworkers will not directly criticize a chatbot that asks questions on every turnonly 11.9% of crowdworkers judged the z = 10 (boost) setting, which asks 99.5% questions, as asking too many questions.",
"8 For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.",
"However, we expect it is possible to obtain further improvements by doing so.",
"A/B tests for interestingness Though our more-specific models yielded significant improvements in engagingness, we were surprised that they did not yield clearer improvements in interestingness.",
"To investigate further, we conducted an A/B interestingness evaluation of three specificity-controlled models, compared to the repetition-controlled baseline.",
"Crowdworkers were shown two conversations (from the main human evaluation) and asked to choose which model was more interesting (see Figure 7 for details).",
"We collected 500 samples per comparison, plus 200 additional human vs repetition-controlled baseline samples, which were used to filter for quality control.",
"After discarding low-quality crowdworkers, we have roughly 300 evaluations per comparison, with an average Cohen's κ = 0.6.",
"As shown in Table 3 , all three models were rated significantly more interesting than the repetitioncontrolled baseline.",
"This convincingly shows that producing utterances with more rare words is a valid strategy to improve interestingness.",
"We have two explanations for why these interestingness differences did not materialize in our main evaluation.",
"Firstly, interestingness is a particularly subjective metric (unlike more tangible metrics such as avoiding repetition and making sense) -this makes it hard to calibrate across crowdworkers.",
"Secondly, we suspect that in our original evaluation, the crowdworkers may have evaluated the interestingness of the task rather than the chatbot.",
"This could account for why subtle increases in conversational ability did not result in higher interestingness ratings -the PersonaChat task itself has a natural interestingness limit.",
"Conclusion What makes a good conversation?",
"Through our evaluation, we showed that a good conversation is about balance -controlling for the right level of repetition, specificity and question-asking is important for overall quality.",
"We also found that conversational aspects such as interestingness, listening, and inquisitiveness are all importantthough optimizing these can introduce a trade-off against certain types of errors (such as repetitive, disfluent, or nonsensical output).",
"Secondly, multiturn evaluation is essential to study what makes a good conversation -multiple turns are required to reveal issues such as repetition, consistency, and question-asking frequency.",
"Lastly, what do we mean by 'good'?",
"Although humanness and engagingness are both commonly used as overall quality metrics, the two are very different.",
"While our models achieved close-to-human scores on engagingness, they failed to get close on humannessshowing that a chatbot need not be human-like to be enjoyable.",
"This striking result also demonstrates the importance of measuring more than one quality metric when evaluating dialogue agents.",
"Outlook Our work shows that neural generative systems have systemic problems when applied to open-ended dialogue, some of which (e.g.",
"repetition) are only observable in the multi-turn setting.",
"Furthermore, control of low-level attributes offers a practical way to correct these problems, yielding large improvements to overall quality -in our case, comparable to systems trained on much more data.",
"Future work includes optimizing control settings automatically, and building more convincingly human-like chatbots.",
"Supplementary Material A Screenshots of human evaluation interface B Human evaluation questionnaire design Here are the questions and multiple-choice options used in the human evaluation, in the order presented: [Engagingness] How much did you enjoy talking to this user?",
"Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the model extrep unigram(w, y <t , x) w is a non-stopword and w appears in a previous utterance by the model intrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears earlier in the hypothesis y <t intrep unigram(w, y <t , x) w is a non-stopword and w appears earlier in the hypothesis y <t partnerrep bigram(w, y <t , x) Adding w to the hypothesis y <t would create a 2-gram that appears in a previous utterance by the partner Repetition control (WD) Extrep bigram WD -0.5 wt -0.5 Extrep bigram WD -1.25 wt -1.25 Extrep bigram WD -3.5 wt -3.5 Extrep bigram WD -inf wt -∞ Repetition-controlled baseline wt -3.5 wt -∞ wt -∞ Question control (CT) Question-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Question-controlled CT 1 wt -3.5 wt -∞ wt -∞ z = 1 Question-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Question-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Question-controlled CT 10 wt -3.5 wt -∞ wt -∞ z = 10 Question-controlled CT 10 (boost) wt 0 * wt -∞ wt -∞ z = 10 Specificity control (CT) Specificity-controlled CT 0 wt -3.5 wt -∞ wt -∞ z = 0 Specificity-controlled CT 2 wt -3.5 wt -∞ wt -∞ z = 2 Specificity-controlled CT 4 wt -3.5 wt -∞ wt -∞ z = 4 Specificity-controlled CT 7 wt -3.5 wt -∞ wt -∞ z = 7 Specificity-controlled CT 9 wt -3.5 wt -∞ wt -∞ z = 9 Specificity control (WD) Specificity-controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -10 Specificity-controlled WD -4 wt -3.5 wt -∞ wt -∞ wt -4 Specificity-controlled WD 4 wt -3.5 wt -∞ wt -∞ wt 4 Specificity-controlled WD 6 wt -3.5 wt -∞ wt -∞ wt 6 Specificity-controlled WD 8 wt -3.5 wt -∞ wt -∞ wt 8 Response-related control (WD) ** Response-related controlled WD -10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt -10 Response-related controlled WD 0 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 0 Response-related controlled WD 5 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 5 Response-related controlled WD 10 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 10 Response-related controlled WD 13 wt -3.5 wt -∞ wt -∞ wt -∞ wt -∞ wt 13 Table 5 : Control settings for all configurations that were human-evaluated.",
"'wt' means the weight used for a weighted decoding feature and 'z =' means the setting (i.e.",
"bucket) for the control variable in conditional training.",
"* In the setting Question-controlled CT 10 (boost), the feature extrep bigram is not used for weighted decoding during beam search, but it is used to rerank the candidates after beam search.",
"See Section 6.4 for details.",
"** Note that the Response-related controlled models additionally introduce repetition controls to block internal bigram repetition and partner bigram repetition.",
"This was necessary to prevent the model from parroting the partner's last utterance.",
"In Table 8 , we find that just adding these extra repetition controls (here called Responserelated controlled WD 0, i.e.",
"increased repetition control but no response-relatedness control) outperforms our canonical Repetition-controlled baseline.",
"However, given that we discovered this later, our specificity and question controlled models are built on top of the canonical Repetition-controlled baseline.",
"Table 7 : Raw scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Model Avoiding Rep.",
"Engage Fluency Humanness Inquisitive Interesting Listening Make Sense Human and baselines * Human 2.79 ± 0.12 3.04 ± 0.11 3.36 ± 0.12 3.35 ± 0.11 2.44 ± 0.12 2.92 ± 0.11 3.32 ± 0.13 3.68 ± 0.11 * Greedy Search 2.08 ± 0.10 2.24 ± 0.11 3.03 ± 0.10 1.75 ± 0.12 1.95 ± 0.10 2.29 ± 0.13 2.62 ± 0.10 3.23 ± 0.10 * Beam Search (beam size 20) 2.08 ± 0.11 2.29 ± 0.11 3.09 ± 0.13 1.71 ± 0.13 2.42 ± 0.11 2.29 ± 0.14 2.47 ± 0.12 3.35 ± 0.13 Repetition control (WD) Extrep bigram WD -0.5 2.62 ± 0.10 2.54 ± 0.12 3.35 ± 0.12 2.13 ± 0.11 2.63 ± 0.11 2.56 ± 0.11 2.93 ± 0.11 3.48 ± 0.11 Extrep bigram WD -1.25 2.78 ± 0.09 2.82 ± 0.13 3.40 ± 0.12 2.27 ± 0.12 2.54 ± 0.09 2.76 ± 0.10 3.05 ± 0.11 3.53 ± 0.14 Extrep bigram WD -3.5 2.83 ± 0.11 2.93 ± 0.10 3.56 ± 0.10 2.43 ± 0.11 2.47 ± 0.11 2.83 ± 0.10 3.14 ± 0.10 3.62 ± 0.12 Extrep bigram WD -inf 2.74 ± 0.11 2.87 ± 0.14 3.49 ± 0.12 2.32 ± 0.13 2.56 ± 0.11 2.75 ± 0.12 3.13 ± 0.12 3.59 ± 0.12 * Repetition-controlled baseline 2.86 ± 0.12 2.82 ± 0.12 3.53 ± 0.10 2.40 ± 0.11 2.62 ± 0.13 2.84 ± 0.12 3.10 ± 0.11 3.58 ± 0.14 Question control (CT) Question-controlled CT 0 2.87 ± 0.12 2.84 ± 0.13 3.51 ± 0.10 2.46 ± 0.11 2.36 ± 0.09 2.76 ± 0.09 3.10 ± 0.10 3.49 ± 0.12 Question-controlled CT 1 2.82 ± 0.11 2.88 ± 0.11 3.42 ± 0.10 2.46 ± 0.12 2.47 ± 0.11 2.79 ± 0.13 3.14 ± 0.11 3.55 ± 0.10 Question-controlled CT 4 2.78 ± 0.12 2.88 ± 0.10 3.47 ± 0.11 2.40 ± 0.09 2.53 ± 0.13 2.83 ± 0.13 3.24 ± 0.11 3.59 ± 0.10 * Question-controlled CT 7 2.81 ± 0.10 2.99 ± 0.11 3.54 ± 0.09 2.35 ± 0.11 2.66 ± 0.12 2.92 ± 0.12 3.11 ± 0.10 3.47 ± 0.10 Question-controlled CT 10 2.67 ± 0.13 2.87 ± 0.11 3.52 ± 0.12 2.35 ± 0.12 2.63 ± 0.12 2.66 ± 0.10 2.94 ± 0.11 3.53 ± 0.12 Question-controlled CT 10 (boost) 2.68 ± 0.12 2.74 ± 0.09 3.42 ± 0.12 2.19 ± 0.13 2.79 ± 0.11 2.74 ± 0.11 3.00 ± 0.12 3.45 ± 0.13 Specificity control (CT) Specificity-controlled CT 0 2.79 ± 0.10 2.93 ± 0.09 3.44 ± 0.12 2.38 ± 0.11 2.56 ± 0.12 2.84 ± 0.12 3.12 ± 0.13 3.61 ± 0.11 Specificity-controlled CT 2 2.78 ± 0.12 2.74 ± 0.11 3.39 ± 0.13 2.31 ± 0.13 2.56 ± 0.13 2.74 ± 0.12 2.99 ± 0.11 3.47 ± 0.10 Specificity-controlled CT 4 2.82 ± 0.10 2.80 ± 0.13 3.44 ± 0.14 2.32 ± 0.13 2.51 ± 0.12 2.78 ± 0.15 3.09 ± 0.13 3.46 ± 0.13 Specificity-controlled CT 7 2.81 ± 0.12 2.91 ± 0.13 3.43 ± 0.11 2.45 ± 0.10 2.49 ± 0.11 2.81 ± 0.12 3.15 ± 0.12 3.55 ± 0.11 Specificity-controlled CT 9 2.80 ± 0.13 2.78 ± 0.10 3.41 ± 0.12 2.35 ± 0.13 2.28 ± 0.11 2.79 ± 0.11 2.91 ± 0.11 3.51 ± 0.12 Specificity control (WD) Specificity-controlled WD -10 2.76 ± 0.11 2.41 ± 0.12 3.19 ± 0.12 2.15 ± 0.11 2.28 ± 0.13 2.35 ± 0.12 2.89 ± 0.11 3.28 ± 0.12 Specificity-controlled WD -4 2.83 ± 0.10 2.76 ± 0.12 3.37 ± 0.10 2.36 ± 0.11 2.46 ± 0.11 2.62 ± 0.12 3.14 ± 0.09 3.52 ± 0.11 * Specificity-controlled WD 4 2.84 ± 0.10 2.96 ± 0.12 3.45 ± 0.13 2.44 ± 0.12 2.56 ± 0.09 2.94 ± 0.11 3.20 ± 0.10 3.54 ± 0.11 Specificity-controlled WD 6 2.81 ± 0.09 2.91 ± 0.10 3.34 ± 0.09 2.31 ± 0.11 2.53 ± 0.12 2.93 ± 0.12 3.09 ± 0.10 3.41 ± 0.12 Specificity-controlled WD 8 2.70 ± 0.11 2.39 ± 0.12 2.54 ± 0.12 1.80 ± 0.13 2.00 ± 0.10 2.49 ± 0.12 2.47 ± 0.10 2.87 ± 0.11 Response-related control (WD) Response-related controlled WD -10 2.77 ± 0.12 2.45 ± 0.12 3.26 ± 0.11 1.96 ± 0.10 2.31 ± 0.12 2.47 ± 0.12 2.73 ± 0.11 3.12 ± 0.12 Response-related controlled WD 0 2.87 ± 0.12 2.97 ± 0.11 3.55 ± 0.09 2.62 ± 0.11 2.48 ± 0.10 2.88 ± 0.12 3.21 ± 0.09 3.70 ± 0.10 Response-related controlled WD 5 2.79 ± 0.10 2.83 ± 0.09 3.35 ± 0.12 2.40 ± 0.12 2.51 ± 0.13 2.80 ± 0.13 3.13 ± 0.12 3.41 ± 0.12 Response-related 
controlled WD 10 2.74 ± 0.11 2.42 ± 0.12 2.93 ± 0.11 1.95 ± 0.12 2.20 ± 0.12 2.56 ± 0.12 2.90 ± 0.12 3.12 ± 0.10 Response-related controlled WD 13 2.63 ± 0.12 2.06 ± 0.11 2.40 ± 0.09 1.74 ± 0.11 2.07 ± 0.11 2.25 ± 0.12 2.49 ± 0.14 2.63 ± 0.10 Table 8 : Calibrated scores (mean ± std.)",
"for all models and human evaluation metrics.",
"The first eight columns are Likert metrics on a 1-4 scale (except Avoiding Repetition, which is a 1-3 scale), where higher is better (except Inquisitiveness, which has an optimal score of 3).",
"The last column, Persona Retrieval, is on a scale from 0 to 1 where higher is better.",
"The maximum of each column (excluding Human row) is in bold.",
"Rows marked with * are the six models included in Figure 3 Figure 9 : Calibrated human evaluation scores for all models.",
"This is the same data as in Table 8 .",
"Note: 'Repetition-controlled baseline+' in the rightmost column is 'Response-related controlled WD 0' in Table 8 .",
"See Table 5 for explanation."
]
} | {
"paper_header_number": [
"1",
"2",
"4",
"5",
"5.1",
"5.2",
"6",
"6.1",
"6.2",
"6.3",
"6.4",
"7",
"8",
"8.1",
"8.2",
"8.3",
"9"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Baseline model",
"Controllable text generation methods",
"Conditional Training (CT)",
"Weighted Decoding (WD)",
"Controlling conversational attributes",
"Repetition",
"Specificity",
"Response-relatedness",
"Question-asking",
"Comparison of control methods",
"Human evaluation results",
"Main findings",
"Effect of controlled attributes",
"A/B tests for interestingness",
"Conclusion"
]
} | GEM-SciDuet-train-133#paper-1358#slide-12 | Conclusions | Control is a good idea for your neural sequence generation dialogue system.
Using simple control, we matched performance of GPT-based contest winner.
Don't repeat yourself. Don't be boring. Ask more questions.
Multi-turn phenomena (repetition, question-asking frequency) are important, so need multi-turn eval to detect them.
Engagingness ≠ Humanness, so think carefully about which to use.
Paid Turkers are not engaging conversationalists, or good judges of engaging conversation. Humans chatting for fun may be better.
Problem: Manually finding the best combination of control settings is painful. | Control is a good idea for your neural sequence generation dialogue system.
Using simple control, we matched performance of GPT-based contest winner.
Don't repeat yourself. Don't be boring. Ask more questions.
Multi-turn phenomena (repetition, question-asking frequency) are important, so need multi-turn eval to detect them.
Engagingness ≠ Humanness, so think carefully about which to use.
Paid Turkers are not engaging conversationalists, or good judges of engaging conversation. Humans chatting for fun may be better.
Problem: Manually finding the best combination of control settings is painful. | [] |
GEM-SciDuet-train-134#paper-1359#slide-0 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-0 | Word Embeddings | Dense vectors of words
Unsupervised training: GloVe, Word2Vec
Words in similar context tend to have similar meaning
Words with similar meanings tend to be close in embedding space | Dense vectors of words
Unsupervised training: GloVe, Word2Vec
Words in similar context tend to have similar meaning
Words with similar meanings tend to be close in embedding space | [] |
GEM-SciDuet-train-134#paper-1359#slide-1 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | GEM-SciDuet-train-134#paper-1359#slide-1 | Training Word Embeddings | This camera is good for high quality | This camera is good for high quality | []
GEM-SciDuet-train-134#paper-1359#slide-3 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-3 | Wikipedia | An article must be written from a neutral point of view, which among other things means representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic. | An article must be written from a neutral point of view, which among other things means representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic. | [] |
GEM-SciDuet-train-134#paper-1359#slide-5 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-5 | Subjectivity Scale | More Objective More Subjective | More Objective More Subjective | [] |
GEM-SciDuet-train-134#paper-1359#slide-6 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-6 | Binary Classification Tasks | Sentiment Classification (positive vs. negative):
Amazon Reviews (24 categories) + Rotten Tomatoes Reviews
A very funny movie vs. One lousy movie
Subjectivity Classification (subjective vs. objective)
The story needs more dramatic meat vs. She's an artist
Topic Classification (in-topic vs. out-of-topic)
Newsgroups Dataset (6 categories) | Sentiment Classification (positive vs. negative):
Amazon Reviews (24 categories) + Rotten Tomatoes Reviews
A very funny movie vs. One lousy movie
Subjectivity Classification (subjective vs. objective)
The story needs more dramatic meat vs. She's an artist
Topic Classification (in-topic vs. out-of-topic)
Newsgroups Dataset (6 categories) | [] |
GEM-SciDuet-train-134#paper-1359#slide-7 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-7 | Methodology | Cross-validation on balanced samples
Binary logistic regression classifier
Sentence embedding = average of word embeddings
The same number of sentences and the same vocabulary when training embeddings | Cross-validation on balanced samples
Binary logistic regression classifier
Sentence embedding = average of word embeddings
The same number of sentences and the same vocabulary when training embeddings | [] |
GEM-SciDuet-train-134#paper-1359#slide-8 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-8 | Empirical Findings | SE and OE are very similar on objective tasks
SE understand sentiment words better than OE?
Subjectivity Classification Topic Classification Amazon Sentiment Rotten Tomatoes Sentiment
SentiVec does not affect objective classification tasks
Amazon Sentiment (average over 24 categories) Rotten Tomatoes Sentiment | SE and OE are very similar on objective tasks
SE understand sentiment words better than OE?
Subjectivity Classification Topic Classification Amazon Sentiment Rotten Tomatoes Sentiment
SentiVec does not affect objective classification tasks
Amazon Sentiment (average over 24 categories) Rotten Tomatoes Sentiment | [] |
GEM-SciDuet-train-134#paper-1359#slide-9 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-9 | Top Words Similar to good | Word Similarity Word Similarity | Word Similarity Word Similarity | [] |
GEM-SciDuet-train-134#paper-1359#slide-10 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-10 | Sentiment Words Still Cause Troubles | Word A Word B Their Similarity | Word A Word B Their Similarity | [] |
GEM-SciDuet-train-134#paper-1359#slide-11 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-11 | SentiVec Embeddings | Similar to good Similarity Similar to good Similarity | Similar to good Similarity Similar to good Similarity | [] |
GEM-SciDuet-train-134#paper-1359#slide-12 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-12 | SentiVec Infusing Sentiment | Predicts context words as in
Negative: waste, junk, horrible, defective,
Positive: love, great, recommend, easy, | Predicts context words as in
Negative: waste, junk, horrible, defective,
Positive: love, great, recommend, easy, | [] |
GEM-SciDuet-train-134#paper-1359#slide-13 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-13 | Logistic SentiVec | This camera is good for high quality
good good (good, camera)
good = 1 good
Random Noise (good, frog) (good, duck) | This camera is good for high quality
good good (good, camera)
good = 1 good
Random Noise (good, frog) (good, duck) | [] |
GEM-SciDuet-train-134#paper-1359#slide-14 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-14 | Spherical SentiVec | Positive Words Negative Words | Positive Words Negative Words | [] |
GEM-SciDuet-train-134#paper-1359#slide-15 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-15 | Changes in Similarity | Target Word: Good Target Word: Bad | Target Word: Good Target Word: Bad | [] |
GEM-SciDuet-train-134#paper-1359#slide-16 | 1359 | Searching for the X-Factor: Exploring Corpus Subjectivity for Word Embeddings | We explore the notion of subjectivity, and hypothesize that word embeddings learnt from input corpora of varying levels of subjectivity behave differently on natural language processing tasks such as classifying a sentence by sentiment, subjectivity, or topic. Through systematic comparative analyses, we establish this to be the case indeed. Moreover, based on the discovery of the outsized role that sentiment words play on subjectivity-sensitive tasks such as sentiment classification, we develop a novel word embedding SentiVec which is infused with sentiment information from a lexical resource, and is shown to outperform baselines on such tasks. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192,
193,
194,
195,
196,
197,
198,
199,
200,
201,
202,
203,
204,
205,
206,
207,
208,
209,
210,
211,
212,
213,
214,
215,
216,
217,
218,
219,
220,
221,
222,
223,
224,
225,
226
],
"paper_content_text": [
"Introduction Distributional analysis methods such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) have been critical for the success of many large-scale natural language processing (NLP) applications (Collobert et al., 2011; Socher et al., 2013; Goldberg, 2016) .",
"These methods employ distributional hypothesis (i.e., words used in the same contexts tend to have similar meaning) to derive distributional meaning via context prediction tasks and produce dense word embeddings.",
"While there have been active and ongoing research on improving word embedding methods (see Section 5), there is a relative dearth of study on the impact that an input corpus may have on the quality of the word embeddings.",
"The previous preoccupation centers around corpus size, i.e., a larger corpus is perceived to be richer in statistical information.",
"For instance, popular corpora include Wikipedia, Common Crawl, and Google News.",
"We postulate that there may be variations across corpora owing to factors that affect language use.",
"Intuitively, the many things we write (a work email, a product review, an academic publication, etc.)",
"may each involve certain stylistic, syntactic, and lexical choices, resulting in meaningfully different distributions of word cooccurrences.",
"Consequently, such factors may be encoded in the word embeddings, and input corpora may be differentially informative towards various NLP tasks.",
"In this work, we are interested in the notion of subjectivity.",
"Some NLP tasks, such as sentiment classification, revolve around subjective expressions of likes or dislikes.",
"Others, such as topic classification, revolve around more objective elements of whether a document belongs to a topic (e.g., science, politics).",
"Our central hypothesis is that word embeddings learnt from input corpora of contrasting levels of subjectivity perform differently when classifying sentences by sentiment, subjectivity, or topic.",
"As the first contribution, we outline an experimental scheme to explore this hypothesis in Section 2, and conduct a series of controlled experiments in Section 3 establishing that there exists a meaningful difference between word embeddings derived from objective vs. subjective corpora.",
"We further systematically investigate factors that could potentially explain the differences.",
"Upon discovering from the investigation that sentiment words play a particularly important role in subjectivity-sensitive NLP tasks, such as sentiment classification, as the second contribution, in Section 4 we develop SentiVec, a novel word embedding method infused with information from lexical resources such as a sentiment lexicon.",
"We further identify two alternative lexical objectives: Logistic SentiVec based on discriminative logistic regression, and Spherical SentiVec based on soft clustering effect of von Mises-Fisher distributions.",
"In Section 6, the proposed word embeddings show evident improvements on sentiment classification, as compared to the base model Word2Vec and other baselines using the same lexical resource.",
"Data and Methodology We lay out the methodology for generating word embeddings of contrasting subjectivity, whose effects are tested on several text classification tasks.",
"Generating Word Embeddings As it is difficult to precisely quantify the degree of subjectivity of a corpus, we resort to generating word embeddings from two corpora that contrast sharply in subjectivity, referring to them as the Objective Corpus and the Subjective Corpus.",
"Objective Corpus As virtually all contents are written by humans, an absolutely objective corpus (in the philosophical sense) may prove elusive.",
"There are however exemplars where, by construction, a corpus aspires to be as objective as possible, and probably achieves that in practical terms.",
"We postulate that one such corpus is Wikipedia.",
"Its list of policies and guidelines 1 , assiduously enforced by an editorial team, specify that an article must be written from a neutral point of view, which among other things means \"representing fairly, proportionately, and, as far as possible, without editorial bias, all of the significant views that have been published by reliable sources on a topic.\".",
"Moreover, it is a common resource for training distributional word embeddings and adopted widely by the research community to solve various NLP problems.",
"Hence, in this study, we use Wikipedia as the Objective Corpus.",
"Subjective Corpus By extension, one may then deem a corpus subjective if its content does not at least meet Wikipedia's neutral point of view requirement.",
"In other words, if the content is replete with personal feelings and opinions.",
"We posit that product reviews would be one such corpus.",
"For instance, Amazon's Community Guideline 2 states that \"Amazon values diverse opinions\", and that \"Content you submit should be relevant and based on your own honest opinions and experience.\".",
"Reviews consist of expressive content written by customers, and may not strive for the neutrality of an encyclopedia.",
"We rely on a 1 https://en.wikipedia.org/wiki/ Wikipedia:List_of_policies_and_ guidelines 2 https://www.amazon.com/gp/help/ customer/display.html?nodeId=201929730 large corpus of Amazon reviews from various categories (e.g., electronics, jewelry, books, and etc.)",
"(McAuley et al., 2015) as the Subjective Corpus.",
"Word Embeddings For the comparative analysis in Section 3, we employ Word2Vec (reviewed below) to generate word embeddings from each corpus.",
"Later on in Section 4, we will propose a new word embedding method called SentiVec.",
"For Word2Vec, we use the Skip-gram model to train distributional word embeddings on the Objective Corpus and the Subjective Corpus respectively.",
"Skip-gram aims to find word embeddings that are useful for predicting nearby words.",
"The objective is to maximize the context probability: log L(W ; C) = w∈W w ∈C(w) log P(w |w), (1) where W is an input corpus and C(w) is the context of token w. The probability of context word w , given observed word w is defined via softmax: P(w |w) = exp (v w · vw) ŵ∈V exp (vŵ · vw) , (2) where v w and v w are corresponding embeddings and V is the corpus vocabulary.",
"Though theoretically sound, the formulation is computationally impractical and requires tractable approximation.",
"Mikolov et al.",
"(2013) propose two efficient procedures to optimize (1): Hierarchical Softmax and Negative Sampling (NS).",
"In this work we focus on the widely adopted NS.",
"The intuition is that a \"good\" model should be able to differentiate observed data from noise.",
"The differentiation task is defined using logistic regression; the goal is to tell apart real context-word pair (w , w) from randomly generated noise pair (ŵ, w).",
"Formally, log L [w',w] = log σ (v w · vw) + k i=1 log σ (−vŵ i · vw), (3) where σ( · ) is a sigmoid function, and {ŵ i } k i=1 are negative samples.",
"Summing up all the contextword pairs, we derive the NS Skip-gram objective: log L word2vec (W ; C) = w∈W w ∈C(w) log L [w',w] .",
"(4) Training word embeddings with Skip-gram, we keep the same hyperparameters across all the runs: 300 dimensions for embeddings, k = 5 negative samples, and window of 5 tokens.",
"The Objective and Subjective corpora undergo the same preprocessing, i.e., discarding short sentences (< 5 tokens) and rare words (< 10 occurrences), removing punctuation, normalizing Unicode symbols.",
"Evaluation Tasks To compare word embeddings, we need a common yardstick.",
"It is difficult to define an inherent quality to word embeddings.",
"Instead, we put them through several evaluation tasks that can leverage word embeddings and standardize their formulations as binary classification tasks.",
"To boil the comparisons down to the essences of word embeddings (which is our central focus), we rely on standardized techniques so as to attribute as much of the differences as possible to the word embeddings.",
"We use logistic regression for classification, and represent a text snippet (e.g., a sentence) in the feature space as the average of the word embeddings of tokens in the snippet (ignoring out-ofvocabulary tokens).",
"The evaluation metric is the average accuracy from 10-fold cross validation.",
"There are three evaluation tasks of varying degrees of hypothetical subjectivity, as outlined below.",
"Each may involve multiple datasets.",
"Sentiment Classification Task This task classifies a sentence into either positive or negative.",
"We use two groups of datasets as follows.",
"The first group consists of 24 datasets from UCSD Amazon product data 3 corresponding to various product categories.",
"Each review has a rating from 1 to 5, which is transformed into positive (ratings 4 or 5) or negative (ratings 1 or 2) class.",
"For each dataset respectively, we sample 5000 sentences each from the positive and negative reviews.",
"Note that these sentences used for this evaluation task have not participated in the generation of word embeddings.",
"Due to space constraint, in most cases we present the average accuracy across the datasets, but where appropriate we enumerate the results for each dataset.",
"The second is Cornell's sentence polarity dataset v1.0 4 (Pang and Lee, 2005) , made up of 5331 each of positive and negative sentences from Rotten Tomatoes movie reviews.",
"The inclusion of this out-of-domain evaluation dataset is useful for examining whether the performance of word embeddings from the Subjective Corpus on the first group above may inadvertently be affected by indomain advantage arising from its Amazon origin.",
"Subjectivity Classification Task This task classifies a sentence into subjective or objective.",
"The dataset is Cornell's subjectivity dataset v1.0 5 , consisting of 5000 subjective sentences derived from Rotten Tomatoes (RT) reviews and 5000 objective sentences derived from IMDB plot summaries (Pang and Lee, 2004) .",
"This task is probably less sensitive to the subjectivity within word embeddings than sentiment classification, as determining whether a sentence is subjective or objective should ideally be an objective undertaking.",
"Topic Classification Task We use the 20 Newsgroups dataset 6 (\"bydate\" version), whereby the newsgroups are organized into six subject matter groupings.",
"We extract the message body and split them into sentences.",
"Each group's sentences then form the in-topic class, and we randomly sample an equivalent number of sentences from the remaining newsgroups to form the out-of-topic class.",
"This results in six datasets, each corresponding to a binary classification task.",
"In most cases, we present the average results, and where appropriate we enumerate the results for each dataset.",
"Hypothetically, this task is the least affected by the subjectivity within word embeddings.",
"Comparative Analyses of Subjective vs.",
"Objective Corpora We conduct a series of comparative analyses under various setups.",
"For each, we compare the performance in the evaluation tasks when using the Objective Corpus and the Subjective Corpus.",
"Table 1 shows the results for this series of analyses.",
"Initial Condition Setup I seeks to answer whether there is any difference between word embeddings derived from the Objective Corpus and the Subjective Corpus.",
"The word embeddings were trained on the whole data respectively.",
"Table 1 shows the corpus statistics and classification accuracies.",
"Evidently, the Subjective word embeddings outperform the Objective word embeddings on all the evaluation tasks.",
"The margins are largest for sentiment classification (86.5% vs. 81.5% or +5% Amazon, and 78.2% vs. 75.4% or +2.8% on Rotten Tomatoes or RT).",
"For subjectivity and topic classifications, the differences are smaller.",
"As earlier hypothesized, the sentiment classification task is more sensitive to subjectivity within word embeddings than the other tasks.",
"Therefore, training word embeddings on a subjective corpus may confer an advantage for such tasks.",
"On the other hand, the corpus statistics show a substantial difference in corpus size, which could be an alternative explanation for the outperformance by the Subjective Corpus if the larger corpus contains more informative distributional statistics.",
"Controlling for Corpus Size In Setup II, we keep the number of sentences in both corpora the same, by randomly downsampling sentences in the Subjective Corpus.",
"This procedure consequently reduces the number of types and tokens (see Table 1 , Setup II, Corpus Statistics).",
"Note that the number of tokens in the Subjective corpus is now fewer than in the Objective, the latter suffers no change.",
"Yet, even after a dramatic reduction in size, the Subjective embeddings still outperform the Objective significantly on both datasets of the sentiment classification task (+4% on Amazon and +2.5% on RT), while showing similar performance on subjectivity and topic classifications.",
"This bolsters the earlier observation that sentiment classification is more sensitive to subjectivity.",
"While there is a small effect due to corpus size difference, the gap in performance between Subjective and Objective embeddings on sentiment classification is still significant and cannot be explained away by the corpus size alone.",
"Controlling for Vocabulary While the Subjective Corpus has a much smaller vocabulary (i.e., # types), we turn a critical eye on whether its apparent advantage lies in having access to special word types that do not exist in the Objective Corpus.",
"In Setup III, we keep the training vocabulary the same for both, removing the types that are Table 2 : Top words of misclassified sentences present in one corpus but not in the other, so that out-of-vocabulary words are ignored in the training phase.",
"Table 1 , Setup III, shows significant reduction in types for both corpora.",
"Yet, the outperformance by the Subjective embeddings on the sentiment classification task still stands (+3.8% on Amazon and +2.3% on RT).",
"Moreover, it is so for both Amazon and Rotten Tomatoes datasets, implying that it is not due to close in-domain similarity between the corpora used for training the word embeddings and the classification tasks.",
"Significant Words To get more insights on the difference between the Subjective and Objective corpora, we analyze the mistakes word embeddings make on the development folds.",
"At this point we focus on the sentiment classification task and specifically on the Amazon data, which indicates the largest performance differences in the controlled experiments (see Table 1 , Setup III).",
"As words are still the main unit of information in distributional word embeddings, we extract words strongly associated with misclassified sentences.",
"We employed log-odds ratio with informative Dirichlet prior method (Monroe et al., 2008) to quantify this association.",
"It is used to contrast the words in misclassified vs. correctly classified sentences, and accounts for the variance of words and their prior counts taken from a large corpus.",
"Table 2 shows the top 25 words most associated with the misclassified sentences, sorted by their association scores.",
"On average 50% of the mistakes overlap for both word embeddings, therefore, some of the words are included in both lists.",
"40 − 44% of these words carry positive or negative sentiment connotations in general (see the underlined words in Table 2 ), while other words like return or send may carry sentiment connotation in e-commerce context.",
"We check if a word carries sentiment connotation using sentiment lexicon compiled by Hu and Liu (2004) , including 6789 words along with positive or negative labels.",
"We also observe linguistic negations (i.e., not, Don't).",
"For instance, the word most associated with the Objective-specific mistakes (excluding the Subjective misclassified sentences) is not, which suggests that perhaps Subjective word embedding accommodates better understanding of linguistic negations, which may partially explain the difference.",
"However, our methodology as outlined in Section 2.2 permits exchangeable word order and is not intended to analyze structural interaction between words.",
"We focus on further analysis of sentiment words, leaving linguistic negations in word embeddings for future investigation.",
"Controlling for Sentiment Words To control for the \"amount\" of sentiment in the Subjective and Objective corpora, we use sentiment lexicon compiled by Hu and Liu (2004) .",
"For each corpus, we create two subcorpora: With Sentiment contains only the sentences with at least one word from the sentiment lexicon, while Without Sentiment is the complement.",
"We match the corpora on the number of sentences, downsampling the larger corpus, train word embeddings on each subcorpus, and proceed with the classification experiments.",
"Table 3 shows the results, including that of random word embeddings for reference.",
"Sentiment lexicon has a significant impact on the performance of sentiment and subjectivity classifications, and a smaller impact on topic classification.",
"Without sentiment, the Subjective embeddings prove more robust, still outperforming the Objective on sentiment classification, while the Objective performs close to random word embeddings on Amazon .",
"In summary, evidences from the series of controlled experiments support the existence of some X-factor to the Subjective embeddings, which confers superior performance in subjectivity-sensitive tasks such as sentiment classification.",
"To leverage the consequential sentiment information, we propose a family of methods, called SentiVec, for training distributional word embeddings that are infused with information on the sentiment polarity of words.",
"The methods are built upon Word2Vec optimization algorithm and make use of available lexical sentiment resources such as SentiWordNet (Baccianella et al., 2010) , sentiment lexicon by Hu and Liu (2004) , and etc.",
"SentiVec seeks to satisfy two objectives, namely context prediction and lexical category prediction: log L = log L word2vec (W ; C) + λ log L lex (W, L), (5) where L word2vec (W ; C) is the Skip-gram objective as in (4) ; L lex (W, L) is a lexical objective for corpus W and lexical resource L; and λ is a tradeoff parameter.",
"Lexical resource L = {X i } n i=1 comprises of n word sets, each X i contains words of the same category.",
"For sentiment classification, we consider positive and negative word categories.",
"Logistic SentiVec Logistic SentiVec admits lexical resource in the form of two disjoint word sets, L = {X 1 , X 2 }, X 1 ∩ X 2 = ∅.",
"The objective is to tell apart which word set of L word w belongs to: log L lex (W, L) (6) = w∈X 1 log P(w ∈ X 1 ) + w∈X 2 log P(w ∈ X 2 ).",
"We further tie these probabilities together, and cast the objective as a logistic regression problem: P(w ∈ X 1 ) = 1 − P(w ∈ X 2 ) = σ(v w · τ ), (7) where v w is a word embedding and τ is a direction vector.",
"Since word embeddings are generally invariant to scaling and rotation when used as downstream feature representations, τ can be chosen randomly and fixed during training.",
"We experiment with randomly sampled unit length directions.",
"For simplicity, we also scale embedding v w to its unit length when computing v w · τ , which now equals to cosine similarity between v w and τ .",
"When v w is completely aligned with τ , the cosine similarity between them is 1, which maximizes P(w ∈ X 1 ) and favors words in X 1 .",
"When v w is opposite to τ , the cosine similarity equals to −1, which maximizes P(w ∈ X 2 ) and predicts vectors from X 2 .",
"Orthogonal vectors have cosine similarity of 0, which makes both w ∈ X 1 and w ∈ X 2 equally probable.",
"Optimizing (6) makes the corresponding word embeddings of X 1 and X 2 gravitate to the opposite semispaces and simulates clustering effect for the words of the same category, while the Word2Vec objective prevents words from collapsing to the same directions.",
"Optimization The objective in (6) permits simple stochastic gradient ascent optimization and can be combined with negative sampling procedure for Skip-gram in (5).",
"The gradient for unnormalized embedding v w is solved as follows: log L [w∈X 1 ] (D, L) v wi = (log P (x ∈ X 1 )) v wi = 1 v w 2 σ − v w · τ v w τ i v w − v wi v w · τ v w (8) The optimization equation for v w , when w ∈ X 2 , can be derived analogously.",
"Spherical SentiVec Spherical SentiVec extends Logistic SentiVec by dealing with any number of lexical categories, L = {X i } n i=1 .",
"As such, the lexical objective takes on generic form: log L lex (W, L) = n i=1 w∈X i log P (w ∈ X i ), (9) Each P (w ∈ X i ) defines embedding generating process.",
"We assume each length-normalized v w for w of L is generated w.r.t.",
"a mixture model of von Mises-Fisher (vMF) distributions.",
"vMF is a probability distribution on a multidimensional sphere, characterized by parameters µ (mean direction) and κ (concentration parameter).",
"Sampled points are concentrated around µ; the greater the κ, the closer the sampled points are to µ.",
"We consider only unimodal vMF distributions, restricting concentration parameters to be strictly positive.",
"Hereby, each X i ∈ L is assigned to vMF distribution parameters (µ i , κ i ) and the membership probabilities are defined as follows: P(w ∈ X i ) = P (v w ; µ i , κ i ) = 1 Z κ i e κ i µ i ·vw , (10) where Z κ is the normalization factor.",
"The Spherical SentiVec lexical objective forces words of every X i ∈ L to gravitate towards and concentrate around their direction mean µ i .",
"As in Logistic SentiVec, it simulates clustering effect for the words of the same set.",
"In comparison to the direction vector of Logistic SentiVec, mean directions of Spherical SentiVec when fixed can substantially influence word embeddings training and must be carefully selected.",
"We optimize the mean directions along with the word embeddings using alternating procedure resembling K-means clustering algorithm.",
"For simplicity, we keep concentration parameters tied, κ 1 = κ 2 = ... = κ n = κ, and treat κ as a hyperparameter of this algorithm.",
"Optimization We derive optimization procedure for updating word embeddings assuming fixed direction means.",
"Like Logistic SentiVec, Spherical SentiVec can be combined with the negative sampling procedure of Skip-gram.",
"The gradient for unnormalized word embedding v w is solved by the following equation: log L [w∈X i ] (W, L) v wj = κi µij vw − vwj vw ·µ i vw vw 2 (11) Once word embedding v w (w ∈ X i ) is updated, we revise direction mean µ i w.r.t.",
"maximum likelihood estimator: µi = w∈X i vw w∈X i vw .",
"(12) Updating the direction means in such a way ensures that the lexical objective is non-decreasing.",
"Assuming the stochastic optimization procedure for L word2vec complies with the same nondecreasing property, the proposed alternating procedure converges.",
"Related Work There have been considerable research on improving the quality of distributional word embeddings.",
"Bolukbasi et al.",
"(2016) seek to debias word embeddings from gender stereotypes.",
"Rothe and Schütze (2017) incorporate WordNet lexeme and synset information.",
"Mrkšic et al.",
"(2016) encode antonym-synonym relations.",
"Liu et al.",
"(2015) encode ordinal relations such as hypernym and hyponym.",
"Kiela et al.",
"(2015) augment Skip-gram to enforce lexical similarity or relatedness constraints, Bollegala et al.",
"(2016) modify GloVe optimization procedure for the same purpose.",
"Faruqui et al.",
"(2015) employ semantic relations of PPDB, WordNet, FrameNet to retrofit word embeddings for various prediction tasks.",
"We use this Retrofitting method 7 as a baseline.",
"Socher et al.",
"(2011) derive multi-word embeddings for sentiment distribution prediction, while we focus on lexical distributional analysis.",
"Maas et al.",
"(2011) and Tang et al.",
"(2016) use documentlevel sentiment annotations to fit word embeddings, but document annotation might not always be available for distributional analysis on neutral corpora such as Wikipedia.",
"SentiVec relies on simple sentiment lexicon instead.",
"Refining (Yu et al., 2018) aligns the sentiment scores taken from lexical resource and the cosine similarity scores of corresponding word embeddings.",
"The method generally requires fine-grained sentiment scores for the words, which may not be available in some settings.",
"We use Refining as a baseline and adopt coarse-grained sentiment lexicon for this method.",
"Villegas et al.",
"(2016) compare various distributional word embeddings arising from the same corpus for sentiment classification, whereas we focus on the differentiation in input corpora and propose novel sentiment-infused word embeddings.",
"Experiments The objective of experiments is to study the efficacy of Logistic SentiVec and Spherical SentiVec word embeddings on the aforementioned text classification tasks.",
"One natural baseline is Word2Vec, as SentiVec subsumes its context prediction objective, while further incorporating lexical category prediction.",
"We include two other baselines that can leverage the same lexical resource but in manners different from SentiVec, namely: Retrofitting (Faruqui et al., 2015) and Refining (Yu et al., 2018) .",
"For these methods, we generate their word embeddings based on Setup III (see Section 3).",
"All the methods were run multiple times with various hyperparameters, optimized via grid-search; for each we present the best performing setting.",
"First, we discuss the sentiment classification task.",
"Table 4 shows the unfolded results for the 24 classification datasets of Amazon, as well as for Rotten Tomatoes.",
"For each classification dataset (row), and for the Objective and Subjective embedding corpora respectively, the best word embedding methods are shown in bold.",
"An asterisk indicates statistically significant 8 results at 5% in comparison to Word2Vec.",
"Both SentiVec variants outperform Word2Vec in the vast majority of the cases.",
"The degree of outperformance is higher for the Objective than the Subjective word embeddings.",
"This is a reasonable trend given our previous findings in Section 3.",
"As the Objective Corpus encodes less information than the Subjective Corpus for sentiment classification, the former is more likely to benefit from the infusion of sentiment information from additional lexical resources.",
"Note that the sentiment infusion into the word embeddings comes from separate lexical resources, and does not involve any sentiment classification label.",
"SentiVec also outperforms the two baselines that benefit from the same lexical resources.",
"Retrofitting does not improve upon Word2Vec, with the two embeddings essentially indistinguishable (the difference is only noticeable at the second decimal point).",
"Refining makes the word embeddings perform worse on the sentiment classification task.",
"One possible explanation is that Refining normally requires fine-grained labeled lexicon, where the words are scored w.r.t.",
"the sentiment scale, whereas we use sentiment lexicon of two labels (i.e., positive or negative).",
"SentiVec accepts coarse-grained sentiment lexicons, and potentially could be extended to deal with fine-grained labels.",
"As previously alluded to, topic and subjectivity classifications are less sensitive to the subjectivity within word embeddings than sentiment classification.",
"One therefore would not expect much, if any, performance gain from infusion of sentiment information.",
"However, such infusion should not subtract or harm the quality of word embeddings either.",
"Table 5 shows that the unfolded results for topic classification on the six datasets, and the result for subjectivity classification are similar across methods.",
"Neither the SentiVec variants, nor Retrofitting and Refining, change the subjectivity and topic classification capabilities much, which means that the used sentiment lexicon is targeted only at the sentiment subspace of embeddings.",
"Illustrative Changes in Embeddings To give more insights on the difference between SentiVec and Word2Vec, we show \"flower\" diagrams in Figure 1 for Logistic SentiVec and Figure 2 for Spherical SentiVec.",
"Each is associated with a reference word (e.g., good for Figure 1a) , and indicates relative changes in cosine distances between the reference word and the testing words surrounding the \"flower\".",
"Every testing word is associated with a \"petal\" or black axis extending from the center of the circle.",
"The \"petal\" length is proportional to the relative distance change in two word embeddings: κ = Word2Vec embeddings correspondingly.",
"If the distance remains unchanged (κ = 1), then the \"petal\" points at the circumference; if the reference and testing words are closer in the SentiVec embedding than they are in Word2Vec (κ < 1), the \"petal\" lies inside the circle; when the distance increases (κ > 1), the \"petal\" goes beyond the circle.",
"The diagrams are presented for Objective Embeddings 9 .",
"We use three reference words: good (positive), bad (negative), time (neutral); as well as three groups of testing words: green for words randomly sampled from positive lexicon (Sector I-II), red for words randomly sampled from negative lexicon (Sector II-III), and gray for frequent neutral common nouns (Sector III-I).",
"Figure 1 shows changes produced by Logistic SentiVec.",
"For the positive reference word (Figure 1a) , the average distance to the green words is shortened, whereas the distance to the red words increases.",
"The reverse is observed for the negative reference word (Figure 1b ).",
"This observation complies with the lexical objective (7) of Logistic SentiVec, which aims to separate the words of two different classes.",
"Note that the gray words suffer only moderate change with respect to positive and negative reference words.",
"For the neutral reference word (Figure 1c ), the distances are only moderately affected across all testing groups.",
"Figure 2 shows that Spherical SentiVec tends to make embeddings more compact than Logistic SentiVec.",
"As the former's lexical objective (9) is designed for clustering, but not for separation, we look at the comparative strength of the clustering effect on the testing words.",
"For the positive reference word (Figure 2a ), the largest clustering effect is achieved for the green words.",
"For the negative reference word (Figure 2b) , as expected, the red words are affected the most.",
"The gray words suffer the least change for all the reference words.",
"In summary, SentiVec effectively provides an advantage for subjectivity-sensitive task such as sentiment classification, while not harming the performance of other text classification tasks.",
"Conclusion We explore the differences between objective and subjective corpora for generating word embeddings, and find that there is indeed a difference in the embeddings' classification task performances.",
"Identifying the presence of sentiment words as one key factor for the difference, we propose a novel method SentiVec to train word embeddings that are infused with the sentiment polarity of words derived from a separate sentiment lexicon.",
"We further identify two lexical objectives: Logistic SentiVec and Spherical SentiVec.",
"The proposed word embeddings show improvements in sentiment classification, while maintaining their performance on subjectivity and topic classifications."
]
} | {
"paper_header_number": [
"1",
"2",
"2.1",
"2.2",
"3",
"4.1",
"4.2",
"5",
"6",
"7"
],
"paper_header_content": [
"Introduction",
"Data and Methodology",
"Generating Word Embeddings",
"Evaluation Tasks",
"Comparative Analyses of Subjective vs. Objective Corpora",
"Logistic SentiVec",
"Spherical SentiVec",
"Related Work",
"Experiments",
"Conclusion"
]
} | GEM-SciDuet-train-134#paper-1359#slide-16 | Conclusion | Explored effects of corpus subjectivity for word embeddings
SentiVec, a method for infusing lexical information into word embeddings
Sentiment-infused SentiVec embedding space facilitates better sentiment-related similarity
Pre-trained Word Embeddings & Code: https://sentivec.preferred.ai/ | Explored effects of corpus subjectivity for word embeddings
SentiVec, a method for infusing lexical information into word embeddings
Sentiment-infused SentiVec embedding space facilitates better sentiment-related similarity
Pre-trained Word Embeddings & Code: https://sentivec.preferred.ai/ | [] |
GEM-SciDuet-train-135#paper-1364#slide-0 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweets content by following their non-sequential propagation structure and generate more powerful representations for identifying different type of rumors. We propose two recursive neural models based on a bottom-up and a top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-0 | Introduction | unverified or deliberately false
How the fake news propagated?
people tend to stop spreading a rumor if it
Previous studies focused on text mining from sequential microblog streams, we
denial want to bridge the content semantics and | unverified or deliberately false
How the fake news propagated?
people tend to stop spreading a rumor if it
Previous studies focused on text mining from sequential microblog streams, we
denial want to bridge the content semantics and | [] |
GEM-SciDuet-train-135#paper-1364#slide-1 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweet content by following its non-sequential propagation structure and generate more powerful representations for identifying different types of rumors. We propose two recursive neural models based on bottom-up and top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at a very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-1 | Motivation | We generally are not good at distinguishing rumors
It is crucial to track and debunk rumors early to minimize their harmful effects.
Online fact-checking services have limited topical coverage and long delay.
Existing models use over-simplistic feature engineering; recent deep neural networks ignore propagation structures; kernel-based methods build on the tree structure but cannot learn high-level feature representations automatically. | We generally are not good at distinguishing rumors
It is crucial to track and debunk rumors early to minimize their harmful effects.
Online fact-checking services have limited topical coverage and long delay.
Existing models use over-simplistic feature engineering; recent deep neural networks ignore propagation structures; kernel-based methods build on the tree structure but cannot learn high-level feature representations automatically. | []
GEM-SciDuet-train-135#paper-1364#slide-2 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweet content by following its non-sequential propagation structure and generate more powerful representations for identifying different types of rumors. We propose two recursive neural models based on bottom-up and top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at a very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-2 | Observation and Hypothesis | Existing works: Consider post representation or propagation structure
(a) RNN-based model (b) Tree kernel-based model
IDEA: Combining the two models, leveraging propagation structure by representation learning algorithm
Why such model do better?
Polarity stances (a) False rumor (b) True rumor
A reply usually responds to its immediate ancestor rather than the root tweet.
Repliers tend to disagree with (or question) those who support a false rumor or deny a true rumor; repliers tend to agree with those who deny a false rumor or support a true rumor.
(a) RNN-based model (b) Tree kernel-based model
IDEA: Combining the two models, leveraging propagation structure by representation learning algorithm
Why such model do better?
Polarity stances (a) False rumor (b) True rumor
A reply usually responds to its immediate ancestor rather than the root tweet.
Repliers tend to disagree with (or question) those who support a false rumor or deny a true rumor; repliers tend to agree with those who deny a false rumor or support a true rumor. | []
GEM-SciDuet-train-135#paper-1364#slide-3 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweets content by following their non-sequential propagation structure and generate more powerful representations for identifying different type of rumors. We propose two recursive neural models based on a bottom-up and a top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-3 | Contributions | The first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts
Propose two variants of RvNN models based on bottom-up and top-down tree structures, to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.
Our experiments based on two real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.
We make the source codes in our experiments publicly accessible at https://github.com/majingCUHK/Rumor_RvNN | The first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts
Propose two variants of RvNN models based on bottom-up and top-down tree structures, to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.
Our experiments based on two real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.
We make the source codes in our experiments publicly accessible at https://github.com/majingCUHK/Rumor_RvNN | [] |
GEM-SciDuet-train-135#paper-1364#slide-4 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweets content by following their non-sequential propagation structure and generate more powerful representations for identifying different type of rumors. We propose two recursive neural models based on a bottom-up and a top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-4 | Related Work | Systems based on common sense and investigative journalism,
Learning-based models for rumor detection
Using handcrafted and temporal features: Liu et al. (2015), Ma et al.
Tree-kernel-based model: Without hand-
image segmentation (Socher et al., 2011), phrase representation from word vectors (Socher et al., 2012)
Sentiment analysis (Socher et al, 2013) etc | Systems based on common sense and investigative journalism,
Learning-based models for rumor detection
Using handcrafted and temporal features: Liu et al. (2015), Ma et al.
Tree-kernel-based model: Without hand-
image segmentation (Socher et al., 2011), phrase representation from word vectors (Socher et al., 2012)
Sentiment analysis (Socher et al, 2013) etc | [] |
GEM-SciDuet-train-135#paper-1364#slide-5 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweet content by following its non-sequential propagation structure and to generate more powerful representations for identifying different types of rumors. We propose two recursive neural models, based on a bottom-up and a top-down tree-structured neural network, for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity for detecting rumors at a very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-5 | Problem Statement | Given a set of microblog posts R = {}, model each source tweet as a tree structure T = ⟨V, E⟩, where each node in V provides the text content of a post, and E is the set of directed edges corresponding to response relations.
Task 1 finer-grained classification for each source post
false rumor, true rumor, non-rumor, unverified rumor
Task 2 detect rumor as early as possible | Given a set of microblog posts R = {}, model each source tweet as a tree structure T = ⟨V, E⟩, where each node in V provides the text content of a post, and E is the set of directed edges corresponding to response relations.
Task 1 finer-grained classification for each source post
false rumor, true rumor, non-rumor, unverified rumor
Task 2 detect rumor as early as possible | [] |
GEM-SciDuet-train-135#paper-1364#slide-6 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweets content by following their non-sequential propagation structure and generate more powerful representations for identifying different type of rumors. We propose two recursive neural models based on a bottom-up and a top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-6 | Tweet Structure | Root tweet (bottom-up tree): #Walmart donates $10,000 to #DarrenWilson fund to continue police racial profiling
1:30 Idc if they killed a mf foreal. Ima always shop with @Walmart. I'm
: NEED SOURCE. have a feeling this is just hearsay ... just bein honest
I agree. I have been hearing this all day but no source 1:12
: Exactly, i don't think Wal-Mart would let everyone know this if they did!! 2:21 (replies)
top-down tree : #Walmart donates $10,000 to #DarrenWilson fund to continue police racial profiling | Root tweet (bottom-up tree): #Walmart donates $10,000 to #DarrenWilson fund to continue police racial profiling
1:30 Idc if they killed a mf foreal. Ima always shop with @Walmart. I'm
: NEED SOURCE. have a feeling this is just hearsay ... just bein honest
I agree. I have been hearing this all day but no source 1:12
: Exactly, i don't think Wal-Mart would let everyone know this if they did!! 2:21 (replies)
top-down tree : #Walmart donates $10,000 to #DarrenWilson fund to continue police racial profiling | [] |
GEM-SciDuet-train-135#paper-1364#slide-7 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweets content by following their non-sequential propagation structure and generate more powerful representations for identifying different type of rumors. We propose two recursive neural models based on a bottom-up and a top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-7 | Standard Recursive Neural Networks | RvNN (tree-structured neural networks) utilize sentence parse trees: the representation associated with each node of a parse tree is computed from its direct children by p = f(W · [c1; c2] + b)
p: the feature vector of a parent node whose children are c1 and c2; the computation is done recursively over all tree nodes | RvNN (tree-structured neural networks) utilize sentence parse trees: the representation associated with each node of a parse tree is computed from its direct children by p = f(W · [c1; c2] + b)
p: the feature vector of a parent node whose children are c1 and c2; the computation is done recursively over all tree nodes | []
GEM-SciDuet-train-135#paper-1364#slide-8 | Bottom up RvNN | Input: bottom-up tree (node: a post represented as a vector of words); GRU equation at node j
Structure: recursively visit every node from the leaves at the bottom to the root at the top (a natural extension to the original RvNN)
Intuition: local rumor indicative features are aggregated along different branches (e.g., subtrees having a denial parent and a set of supportive children) (generate a feature vector for each subtree)
: #Walmart donates $10,000 to #DarrenWilson fund to continue police racial profiling
1:30 Idc if they killed a mf foreal. Ima always shop with @Walmart. I'm
: NEED SOURCE. have a feeling this is just hearsay ... just bein honest
I agree. I have been hearing this all day but no source 1:12
: Exactly, i don't think Wal-Mart would let everyone know this if they did!! 2:21 | Input: bottom-up tree (node: a post represented as a vector of words); GRU equation at node j
Structure: recursively visit every node from the leaves at the bottom to the root at the top (a natural extension to the original RvNN)
Intuition: local rumor indicative features are aggregated along different branches (e.g., subtrees having a denial parent and a set of supportive children) (generate a feature vector for each subtree)
: #Walmart donates $10,000 to #DarrenWilson fund to continue police racial profiling
1:30 Idc if they killed a mf foreal. Ima always shop with @Walmart. I'm
: NEED SOURCE. have a feeling this is just hearsay ... just bein honest
I agree. I have been hearing this all day but no source 1:12
: Exactly, i don't think Wal-Mart would let everyone know this if they did!! 2:21 | [] |
GEM-SciDuet-train-135#paper-1364#slide-9 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweets content by following their non-sequential propagation structure and generate more powerful representations for identifying different type of rumors. We propose two recursive neural models based on a bottom-up and a top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-9 | Top down RvNN | Input: top-down tree GRU transition equation at node
(combines a node's own input with its parent node's hidden state)
Structure: recursively visit from the root node to its children until reaching all leaf nodes. (reverse Bottom-up RvNN)
Intuition: rumor-indicative features are aggregated along the propagation path (e.g., if a post agrees with its parent's stance, the parent's stance should be reinforced) (models how information flows from the source post to the current node)
: #Walmart donates $10,000 to #DarrenWilson fund to continue police racial profiling
1:30 Idc if they killed a mf foreal. Ima always shop with @Walmart. I'm
: NEED SOURCE. have a feeling this is just hearsay ... just bein honest
I agree. I have been hearing this all day but no source 1:12
: Exactly, i don't think Wal-Mart would let everyone know this if they did!! 2:21 | Input: top-down tree GRU transition equation at node
(combines a node's own input with its parent node's hidden state)
Structure: recursively visit from the root node to its children until reaching all leaf nodes. (reverse Bottom-up RvNN)
Intuition: rumor-indicative features are aggregated along the propagation path (e.g., if a post agrees with its parent's stance, the parent's stance should be reinforced) (models how information flows from the source post to the current node)
: #Walmart donates $10,000 to #DarrenWilson fund to continue police racial profiling
1:30 Idc if they killed a mf foreal. Ima always shop with @Walmart. I'm
: NEED SOURCE. have a feeling this is just hearsay ... just bein honest
I agree. I have been hearing this all day but no source 1:12
: Exactly, i don't think Wal-Mart would let everyone know this if they did!! 2:21 | [] |
GEM-SciDuet-train-135#paper-1364#slide-10 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweets content by following their non-sequential propagation structure and generate more powerful representations for identifying different type of rumors. We propose two recursive neural models based on a bottom-up and a top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-10 | Model Training | Comparison: both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes
Bottom-up RvNN: the state of the root node (i.e., source tweet) can be regarded as the representation of the whole tree (can be used for supervised classification).
Top-down RvNN: the representations of each path are eventually embedded into the hidden vectors of all the leaf nodes.
Bottom-up RvNN: $\hat{y} = \mathrm{Softmax}(V h_0 + b)$, where $h_0$ is the learned vector of the root node
Top-down RvNN: $\hat{y} = \mathrm{Softmax}(V h_\infty + b)$, where $h_\infty$ is the pooling vector over all leaf nodes
Objective Function: $L(y, \hat{y}) = \sum_{n=1}^{N} \sum_{c=1}^{C} (y_c - \hat{y}_c)^2 + \lambda \|\theta\|_2^2$
Training Procedure parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) | Comparison: both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes
Bottom-up RvNN: the state of the root node (i.e., source tweet) can be regarded as the representation of the whole tree (can be used for supervised classification).
Top-down RvNN: the representations of each path are eventually embedded into the hidden vectors of all the leaf nodes.
Bottom-up RvNN: $\hat{y} = \mathrm{Softmax}(V h_0 + b)$, where $h_0$ is the learned vector of the root node
Top-down RvNN: $\hat{y} = \mathrm{Softmax}(V h_\infty + b)$, where $h_\infty$ is the pooling vector over all leaf nodes
Objective Function: $L(y, \hat{y}) = \sum_{n=1}^{N} \sum_{c=1}^{C} (y_c - \hat{y}_c)^2 + \lambda \|\theta\|_2^2$
Training Procedure parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) | [] |
GEM-SciDuet-train-135#paper-1364#slide-11 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweets content by following their non-sequential propagation structure and generate more powerful representations for identifying different type of rumors. We propose two recursive neural models based on a bottom-up and a top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-11 | Data Collection | Use two reference Tree datasets:
URL of the datasets: https://www.dropbox.com/s/0jhsfwep3ywvpca/rumdetect2017.zip?dl=0 | Use two reference Tree datasets:
URL of the datasets: https://www.dropbox.com/s/0jhsfwep3ywvpca/rumdetect2017.zip?dl=0 | [] |
GEM-SciDuet-train-135#paper-1364#slide-12 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweet content by following its non-sequential propagation structure and to generate more powerful representations for identifying different types of rumors. We propose two recursive neural models based on bottom-up and top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity for detecting rumors at a very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-12 | Approaches to compare with | DTR: Decision tree-based ranking model using enquiry phrases to identify trending rumors (Zhao et al., 2015)
DTC: Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011)
RFC: Random Forest Classifier using three parameters to fit the temporal tweets volume curve (Kwon et al., 2013)
SVM-TS: Linear SVM classifier using time-series structures to model the variation of social context features. (Ma et al., 2015)
SVM-BOW: linear SVM classifier using bag-of-words.
SVM-TK and SVM-HK: SVM classifiers using a Tree Kernel (Ma et al., 2017) and a Hybrid Kernel (Wu et al., 2015), respectively; both model propagation structures with kernels.
GRU-RNN: The RNN-based rumor detection model (Ma et al., 2016).
Ours (BU-RvNN and TD-RvNN): Our bottom-up and top-down recursive models. | DTR: Decision tree-based ranking model using enquiry phrases to identify trending rumors (Zhao et al., 2015)
DTC: Twitter information credibility model using Decision Tree Classifier (Castillo et al., 2011)
RFC: Random Forest Classifier using three parameters to fit the temporal tweets volume curve (Kwon et al., 2013)
SVM-TS: Linear SVM classifier using time-series structures to model the variation of social context features. (Ma et al., 2015)
SVM-BOW: linear SVM classifier using bag-of-words.
SVM-TK and SVM-HK: SVM classifiers using a Tree Kernel (Ma et al., 2017) and a Hybrid Kernel (Wu et al., 2015), respectively; both model propagation structures with kernels.
GRU-RNN: The RNN-based rumor detection model (Ma et al., 2016).
Ours (BU-RvNN and TD-RvNN): Our bottom-up and top-down recursive models. | []
GEM-SciDuet-train-135#paper-1364#slide-13 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweet content by following its non-sequential propagation structure and generate more powerful representations for identifying different types of rumors. We propose two recursive neural models based on bottom-up and top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity for detecting rumors at a very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-13 | Results on Twitter15 | NR: Non-Rumor; FR: False Rumor;
TR: True Rumor; UR: Unverified Rumor;
(some baselines use user info, which helps NR vs others) | NR: Non-Rumor; FR: False Rumor;
TR: True Rumor; UR: Unverified Rumor;
(some baselines use user info, which helps NR vs others) | []
GEM-SciDuet-train-135#paper-1364#slide-14 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweet content by following its non-sequential propagation structure and generate more powerful representations for identifying different types of rumors. We propose two recursive neural models based on bottom-up and top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity for detecting rumors at a very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-14 | Results on Twitter16 | NR: Non-Rumor; FR: False Rumor;
TR: True Rumor; UR: Unverified Rumor;
GRU-RNN models without hand-crafted features | NR: Non-Rumor; FR: False Rumor;
TR: True Rumor; UR: Unverified Rumor;
GRU-RNN models without hand-crafted features | [] |
GEM-SciDuet-train-135#paper-1364#slide-15 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweets content by following their non-sequential propagation structure and generate more powerful representations for identifying different type of rumors. We propose two recursive neural models based on a bottom-up and a top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-15 | Results on Early Detection | In the first few hours, the accuracy of the RvNN- based methods climbs more rapidly and stabilize more quickly
RvNN only need around
8 hours or about 90 tweets to achieve the comparable performance of the best baseline model. | In the first few hours, the accuracy of the RvNN- based methods climbs more rapidly and stabilize more quickly
RvNN only need around
8 hours or about 90 tweets to achieve the comparable performance of the best baseline model. | [] |
GEM-SciDuet-train-135#paper-1364#slide-16 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweets content by following their non-sequential propagation structure and generate more powerful representations for identifying different type of rumors. We propose two recursive neural models based on a bottom-up and a top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-16 | Early Detection Example | Example subtree of a rumor captured by the algorithm at early stage of propagation
Bottom-up RvNN: a set of responses supporting the parent posts that deny or question the source post.
Top-down RvNN: some patterns of propagation from the root to leaf nodes like support→deny→support. Baselines: sequential models may be confused because the supportive key terms such as be right, yeah, exactly! dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.
Bottom-up RvNN: a set of responses supporting the parent posts that deny or question the source post.
Top-down RvNN: some patterns of propagation from the root to leaf nodes like support→deny→support. Baselines: sequential models may be confused because the supportive key terms such as be right, yeah, exactly! dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.
GEM-SciDuet-train-135#paper-1364#slide-17 | 1364 | Rumor Detection on Twitter with Tree-structured Recursive Neural Networks | Automatic rumor detection is technically very challenging. In this work, we try to learn discriminative features from tweets content by following their non-sequential propagation structure and generate more powerful representations for identifying different type of rumors. We propose two recursive neural models based on a bottom-up and a top-down tree-structured neural networks for rumor representation learning and classification, which naturally conform to the propagation layout of tweets. Results on two public Twitter datasets demonstrate that our recursive neural models 1) achieve much better performance than state-of-the-art approaches; 2) demonstrate superior capacity on detecting rumors at very early stage. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173
],
"paper_content_text": [
"Introduction Rumors have always been a social disease.",
"In recent years, it has become unprecedentedly convenient for the \"evil-doers\" to create and disseminate rumors in massive scale with low cost thanks to the popularity of social media outlets on Twitter, Facebook, etc.",
"The worst effect of false rumors could be devastating to individual and/or society.",
"Research pertaining rumors spans multiple disciplines, such as philosophy and humanities (Di-Fonzo and Bordia, 2007; Donovan, 2007) , social psychology (Allport and Postman, 1965; Jaeger et al., 1980; Rosnow and Foster, 2005) , political studies (Allport and Postman, 1946; Berinsky, 2017) , management science (DiFonzo et al., 1994; Kimmel, 2004) and recently computer science and artificial intelligence (Qazvinian et al., 2011; Ratkiewicz et al., 2011; Castillo et al., 2011; Hannak et al., 2014; Zhao et al., 2015; Ma et al., 2015) .",
"Rumor is commonly defined as information that emerge and spread among people whose truth value is unverified or intentionally false (Di-Fonzo and Bordia, 2007; Qazvinian et al., 2011) .",
"Analysis shows that people tend to stop spreading a rumor if it is known as false (Zubiaga et al., 2016b) .",
"However, identifying such misinformation is non-trivial and needs investigative journalism to fact check the suspected claim, which is labor-intensive and time-consuming.",
"The proliferation of social media makes it worse due to the ever-increasing information load and dynamics.",
"Therefore, it is necessary to develop automatic and assistant approaches to facilitate real-time rumor tracking and debunking.",
"For automating rumor detection, most of the previous studies focused on text mining from sequential microblog streams using supervised models based on feature engineering (Castillo et al., 2011; Kwon et al., 2013; Liu et al., 2015; Ma et al., 2015) , and more recently deep neural models (Ma et al., 2016; Chen et al., 2017; Ruchansky et al., 2017) .",
"These methods largely ignore or oversimplify the structural information associated with message propagation which however has been shown conducive to provide useful clues for identifying rumors.",
"Kernel-based method (Wu et al., 2015; Ma et al., 2017) was thus proposed to model the structure as propagation trees in order to differentiate rumorous and non-rumorous claims by comparing their tree-based similarities.",
"But such kind of approach cannot directly classify a tree without pairwise comparison with all other trees imposing unnecessary overhead, and it also cannot automatically learn any high-level feature representations out of the noisy surface features.",
"In this paper, we present a neural rumor detection approach based on recursive neural networks (RvNN) to bridge the content semantics and propagation clues.",
"RvNN and its variants were originally used to compose phrase or sentence representation for syntactic and semantic parsing (Socher et al., 2011 (Socher et al., , 2012 .",
"Unlike parsing, the input into our model is a propagation tree rooted from a source post rather than the parse tree of an individual sentence, and each tree node is a responsive post instead of an individual words.",
"The content semantics of posts and the responsive relationship among them can be jointly captured via the recursive feature learning process along the tree structure.",
"So, why can such neural model do better for the task?",
"Analysis has generally found that Twitter could \"self-correct\" some inaccurate information as users share opinions, conjectures and evidences (Zubiaga et al., 2017) .",
"To illustrate our intuition, Figure 1 exemplifies the propagation trees of two rumors in our dataset, one being false and the other being true 1 .",
"Structure-insensitive methods basically relying on the relative ratio of different stances in the text cannot do well when such clue is unclear like this example.",
"However, it can be seen that when a post denies the false rumor, it tends to spark supportive or affirmative replies confirming the denial; in contrast, denial to a true rumor tends to trigger question or denial in its replies.",
"This observation may suggest a more general hypothesis that the repliers tend to disagree with (or question) who support a false rumor or deny a true rumor, and also they tend to agree with who deny a false rumor or support a true rumor.",
"Meanwhile, a reply, rather than directly responding to the source tweet (i.e., the root), is usually responsive to its immediate ancestor (Lukasik et al., 2016; Zubiaga et al., 2016a) , suggesting obvious local characteristic of the interaction.",
"The recursive network naturally models such structures for learning to capture the rumor indicative signals and enhance the representation by recursively aggregating the signals from different branches.",
"To this end, we extend the standard RvNN into two variants, i.e., a bottom-up (BU) model and a top-down (TD) model, which represent the propagation tree structure from different angles, in order to visit the nodes and combine their representations following distinct directions.",
"The important merit of such architecture is that the node features can be selectively refined by the recursion given the connection and direction of all paths of the 1 False (true) rumor means the veracity of the rumorous claim is false (true).",
"Figure 1 : Propagation trees of two rumorous source tweets.",
"Nodes may express stances on their parent as commenting, supporting, questioning or denying.",
"The edge arrow indicates the direction from a response to its responded node, and the polarity is marked as '+' ('-') for support (denial).",
"The same node color indicates the same stance on the veracity of root node (i.e., source tweet).",
"tree.",
"As a result, it can be expected that the discriminative signals are better embedded into the learned representations.",
"We evaluate our proposed approach based on two public Twitter datasets.",
"The results show that our method outperforms strong rumor detection baselines with large margin and also demonstrate much higher effectiveness for detection at early stage of propagation, which is promising for realtime intervention and debunking.",
"Our contributions are summarized as follows in three folds: • This is the first study that deeply integrates both structure and content semantics based on tree-structured recursive neural networks for detecting rumors from microblog posts.",
"• We propose two variants of RvNN models based on bottom-up and top-down tree structures to generate better integrated representations for a claim by capturing both structural and textural properties signaling rumors.",
"• Our experiments based on real-world Twitter datasets achieve superior improvements over state-of-the-art baselines on both rumor classification and early detection tasks.",
"We make the source codes in our experiments publicly accessible 2 .",
"Related Work Most previous automatic approaches for rumor detection (Castillo et al., 2011; Yang et al., 2012; Liu et al., 2015) intended to learn a supervised classifier by utilizing a wide range of features crafted from post contents, user profiles and propagation patterns.",
"Subsequent studies were then conducted to engineer new features such as those representing rumor diffusion and cascades (Friggeri et al., 2014; Hannak et al., 2014) characterized by comments with links to debunking websites.",
"Kwon et al.",
"(2013) introduced a time-series-fitting model based on the volume of tweets over time.",
"Ma et al.",
"(2015) extended their model with more chronological social context features.",
"These approaches typically require heavy preprocessing and feature engineering.",
"Zhao et al.",
"(2015) alleviated the engineering effort by using a set of regular expressions (such as \"really?",
"\", \"not true\", etc) to find questing and denying tweets, but the approach was oversimplified and suffered from very low recall.",
"Ma et al.",
"(2016) used recurrent neural networks (RNN) to learn automatically the representations from tweets content based on time series.",
"Recently, they studied to mutually reinforce stance detection and rumor classification in a neural multi-task learning framework (Ma et al., 2018) .",
"However, the approaches cannot embed features reflecting how the posts are propagated and requires careful data segmentation to prepare for time sequence.",
"Some kernel-based methods were exploited to model the propagation structure.",
"Wu et al.",
"(2015) proposed a hybrid SVM classifier which combines a RBF kernel and a random-walk-based graph kernel to capture both flat and propagation patterns for detecting rumors on Sina Weibo.",
"Ma et al.",
"(2017) used tree kernel to capture the similarity of propagation trees by counting their similar substructures in order to identify different types of rumors on Twitter.",
"Compared to their studies, our model can learn the useful features via a more natural and general approach, i.e., the tree-structured neural network, to jointly generate representations from both structure and content.",
"RvNN has demonstrated state-of-the-art performances in a variety of tasks, e.g., images segmentation (Socher et al., 2011) , phrase representation from word vectors (Socher et al., 2012) , and sentiment classification in sentences (Socher et al., 2013) .",
"More recently, a deep RvNN was proposed to model the compositionality in natural language for fine-grained sentiment classification by stacking multiple recursive layers (Irsoy and Cardie, 2014) .",
"In order to avoid gradient vanishing, some studies integrated Long Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) to RvNN Tai et al., 2015) .",
"Mou et al.",
"(2015) used a convolutional network over tree structures for syntactic tree parsing of natural language sentences.",
"Problem Statement We define a Twitter rumor detection dataset as a set of claims C = {C 1 , C 2 , · · · , C |C| }, where each claim C i corresponds to a source tweet r i which consists of ideally all its relevant responsive tweets in chronological order, i.e., C i = {r i , x i1 , x i2 , · · · , x im } where each x i * is a responsive tweet of the root r i .",
"Note that although the tweets are notated sequentially, there are connections among them based on their reply or repost relationships, which can form a propagation tree structure (Wu et al., 2015; Ma et al., 2017) with r i being the root node.",
"We formulate this task as a supervised classification problem, which learns a classifier f from labeled claims, that is f : C i → Y i , where Y i takes one of the four finer-grained classes: non-rumor, false rumor, true rumor, and unverified rumor that are introduced in the literature (Ma et al., 2017; Zubiaga et al., 2016b ).",
"An important issue of the tree structure is concerned about the direction of edges, which can result in two different architectures of the model: 1) a bottom-up tree; 2) a top-down tree, which are defined as follows: • Bottom-up tree takes the similar shape as shown in Figure 1 , where responsive nodes always point to their responded nodes and leaf nodes not having any response are laid out at the furthest level.",
"We represent a tree as T i = V i , E i , where V i = C i which con- sists of all relevant posts as nodes, and E i denotes a set of all directed links, where for any u, v ∈ V i , u ← v exists if v responses to u.",
"This structure is similar to a citation network where a response mimics a reference.",
"• Top-down tree naturally conforms to the direction of information propagation, in which a link u → v means the information flows from u to v and v sees it and provides a response to u.",
"This structure reverses bottomup tree and simulates how information cas- cades from a source tweet, i.e., the root, to all its receivers, i.e., the decedents, which is similar as (Wu et al., 2015; Ma et al., 2017) .",
"RvNN-based Rumor Detection The core idea of our method is to strengthen the high-level representation of tree nodes by the recursion following the propagation structure over different branches in the tree.",
"For instance, the responsive nodes confirming or supporting a node (e.g., \"I agree\", \"be right\", etc) can further reinforce the stance of that node while denial or questioning responses (e.g., \"disagree, \"really?!)",
"otherwise weaken its stance.",
"Compared to the kernelbased method using propagation tree (Wu et al., 2015; Ma et al., 2017) , our method does not need pairwise comparison among large number of subtrees, and can learn much stronger representation of content following the response structure.",
"In this section, we will describe our extension to the standard RvNN for modeling rumor detection based on the bottom-up and top-down architectures presented in Section 3.",
"Standard Recursive Neural Networks RvNN is a type of tree-structured neural networks.",
"The original version of RvNN utilized binarized sentence parse trees (Socher et al., 2012) , in which the representation associated with each node of a parse tree is computed from its direct children.",
"The overall structure of the standard RvNN is illustrated as the right side of Figure 2 , corresponding to the input parse tree at the left side.",
"Leaf nodes are the words in an input sentence, each represented by a low-dimensional word embedding.",
"Non-leaf nodes are sentence constituents, computed by recursion based on the presentations of child nodes.",
"Let p be the feature vector of a parent node whose children are c 1 and c 2 , the representation of the parent is computed by p = f (W ·[c 1 ; c 2 ]+b), where f (·) is the activation function with W and b as parameters.",
"This computation is done recursively over all tree nodes; the learned hidden vectors of the nodes can then be used for various classification tasks.",
"Bottom-up RvNN The core idea of bottom-up model is to generate a feature vector for each subtree by recursively visiting every node from the leaves at the bottom to the root at the top.",
"In this way, the subtrees with similar contexts, such as those subtrees having a denial parent and a set of supportive children, will be projected into the proximity in the representation space.",
"And thus such local rumor indicative features are aggregated along different branches into some global representation of the whole tree.",
"For this purpose, we make a natural extension to the original RvNN.",
"The overall structure of our proposed bottom-up model is illustrated in Figure 3(b) , taking a bottom-up tree (see Figure 3 (a)) as input.",
"Different from the standard RvNN, the input of each node in the bottom-up model is a post represented as a vector of words in the vocabulary in terms of tf idf values.",
"Here, every node has an input vector, and the number of children of nodes varies significantly 3 .",
"In rumor detection, long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) and gated recurrent units (GRU) (Cho et al., 2014) were used to learn textual representation, which adopts memory units to store information over long time steps (Ma et al., 2016) .",
"In this paper, we choose to extend GRU as hidden unit to model long-distance interactions over the tree nodes because it is more efficient due to fewer parameters.",
"Let S(j) denote the set of direct children of the node j.",
"The transition equations of node j in the bottom-up model are formulated as follows: where x j is the original input vector of node j, E denotes the parameter matrix for transforming this input post,x j is the transformed representation of j, [W * , U * ] are the weight connections inside GRU, and h j and h s refer to the hidden state of j and its s-th child.",
"Thus h S denotes the sum of the hidden state of all the children of j assuming that all children are equally important to j.",
"As with the standard GRU, denotes element-wise multiplication; a reset gate r j determines how to combine the current inputx j with the memory of children, and an update gate z j defines how much memory from the children is cascaded into the current node; andh j denotes the candidate activation of the hidden state of the current node.",
"Different from the standard GRU unit, the gating vectors in our variant of GRU are dependent on the states of many child units, allowing our model to incorporate representations from different children.",
"After recursive aggregation from bottom to up, the state of root node (i.e., source tweet) can be regard as the representation of the whole tree which is used for supervised classification.",
"So, an output layer is connected to the root node for predicting the class of the tree using a softmax function: x j = x j E h S = s∈S(j) h s r j = σ (W rxj + U r h S ) z j = σ (W zxj + U z h S ) h j = tanh (W hxj + U h (h S r j )) h j = (1 − z j ) h S + z j h j y = Sof tmax(Vh 0 + b) (2) where h 0 is the learned hidden vector of root node; V and b are the weights and bias in output layer.",
"Top-down RvNN This model is designed to leverage the structure of top-down tree to capture complex propagation patterns for classifying rumorous claims, which is shown in Figure 3 (c).",
"It models how the informa-tion flows from source post to the current node.",
"The idea of this top-down approach is to generate a strengthened feature vector for each post considering its propagation path, where rumor-indicative features are aggregated along the propagation history in the path.",
"For example, if current post agree with its parent's stance which denies the source post, the denial stance from the root node down to the current node on this path should be reinforced.",
"Due to different branches of any non-leaf node, the top-down visit to its subtree nodes is also recursive.",
"However, the nature of top-down tree lends this model different from the bottom-up one.",
"The representation of each node is computed by combining its own input and its parent node instead of its children nodes.",
"This process proceeds recursively from the root node to its children until all leaf nodes are reached.",
"Suppose that the hidden state of a non-leaf node can be passed synchronously to all its child nodes without loss.",
"Then the hidden state h j of a node j can be computed by combining the hidden state h P(j) of its parent node P(j) and its own input vector x j .",
"Therefore, the transition equations of node j can be formulated as a standard GRU: x j = x j E r j = σ W rxj + U r h P(j) z j = σ W zxj + U z h P(j) h j = tanh W hxj + U h (h P(j) r j ) h j = (1 − z j ) h P(j) + z j h j (3) Through the top-down recursion, the learned representations are eventually embedded into the hidden vector of all the leaf nodes.",
"Since the num-ber of leaf nodes varies, the resulting vectors cannot be directly fed into a fixed-size neural layer for output.",
"Therefore, we add a max-pooling layer to take the maximum value of each dimension of the vectors over all the leaf nodes.",
"This can also help capture the most appealing indicative features from all the propagation paths.",
"Based on the pooling result, we finally use a softmax function in the output layer to predict the label of the tree: y = Sof tmax(Vh ∞ + b) (4) where h ∞ is the pooling vector over all leaf nodes, V and b are parameters in the output layer.",
"Although both of the two RvNN models aim to capture the structural properties by recursively visiting all nodes, we can conjecture that the topdown model would be better.",
"The hypothesis is that in the bottom-up case the final output relies on the representation of single root, and its information loss can be larger than the top-down one since in the top-down case the representations embedded into all leaf nodes along different propagation paths can be incorporated via pooling holistically.",
"Model Training The model is trained to minimize the squared error between the probability distributions of the predictions and the ground truth: L(y,ŷ) = N n=1 C c=1 (y c −ŷ c ) 2 + λ||θ|| 2 2 (5) where y c is the ground truth andŷ c is the prediction probability of a class, N is the number of training claims, C is the number of classes, ||.|| 2 is the L 2 regularization term over all model parameters θ, and λ is the trade-off coefficient.",
"During training, all the model parameters are updated using efficient back-propagation through structure (Goller and Kuchler, 1996; Socher et al., 2013) , and the optimization is gradient-based following the Ada-grad update rule (Duchi et al., 2011) to speed up the convergence.",
"We empirically initialize the model parameters with uniform distribution and set the vocabulary size as 5,000, the size of embedding and hidden units as 100.",
"We iterate over all the training examples in each epoch and continue until the loss value converges or the maximum epoch number is met.",
"Experiments and Results Datasets For experimental evaluation, we use two publicly available Twitter datasets released by Ma et al.",
"(2017) , namely Twitter15 and Twitter16 4 , which respectively contains 1,381 and 1,181 propagation trees (see (Ma et al., 2017) for detailed statistics).",
"In each dataset, a group of wide spread source tweets along with their propagation threads, i.e., replies and retweets, are provided in the form of tree structure.",
"Each tree is annotated with one of the four class labels, i.e., non-rumor, false rumor, true rumor and unverified rumor.",
"We remove the retweets from the trees since they do not provide any extra information or evidence contentwise.",
"We build two versions for each tree, one for the bottom-up tree and the other for the top-down tree, by flipping the edges' direction.",
"Experimental Setup We make comprehensive comparisons between our models and some state-of-the-art baselines on rumor classification and early detection tasks.",
"-DTR: Zhao et al.",
"(2015) proposed a Decision-Tree-based Ranking model to identify trending rumors by searching for inquiry phrases.",
"-DTC: The information credibility model using a Decision-Tree Classifier (Castillo et al., 2011) based on manually engineering various statistical features of the tweets.",
"-RFC: The Random Forest Classier using 3 fitting parameters as temporal properties and a set of handcrafted features on user, linguistic and structural properties (Kwon et al., 2013) .",
"-SVM-TS: A linear SVM classifier that uses time-series to model the variation of handcrafted social context features (Ma et al., 2015) .",
"-SVM-BOW: A naive baseline we built by representing text content using bag-of-words and using linear SVM for rumor classification.",
"-SVM-TK and SVM-HK: SVM classifier uses a Tree Kernel (Ma et al., 2017) and that uses a Hybrid Kernel (Wu et al., 2015) , respectively, both of which model propagation structures with kernels.",
"-GRU-RNN: A detection model based on recurrent neural networks (Ma et al., 2016) with GRU units for learning rumor representations by modeling sequential structure of relevant posts.",
"We implement DTC and RFC using Weka 5 , SVM-based models using LibSVM 6 and all neural-network-based models with Theano 7 .",
"We conduct 5-fold cross-validation on the datasets and use accuracy over all the four categories and F1 measure on each class to evaluate the performance of models.",
"Rumor Classification Performance As shown in Table 1 , our proposed models basically yield much better performance than other methods on both datasets via the modeling of interaction structures of posts in the propagation.",
"It is observed that the performance of the 4 baselines in the first group based on handcrafted features is obviously poor, varying between 0.409 and 0.585 in accuracy, indicating that they fail to generalize due to the lack of capacity capturing helpful features.",
"Among these baselines, SVM-TS and RFC perform relatively better because they 5 www.cs.waikato.ac.nz/ml/weka 6 www.csie.ntu.edu.tw/˜cjlin/libsvm 7 deeplearning.net/software/theano use additional temporal traits, but they are still clearly worse than the models not relying on feature engineering.",
"DTR uses a set of regular expressions indicative of stances.",
"However, only 19.6% and 22.2% tweets in the two datasets contain strings covered by these regular expressions, rendering unsatisfactory result.",
"Among the two kernel methods that are based on comparing propagation structures, we observe that SVM-TK is much more effective than SVM-HK.",
"There are two reasons: 1) SVM-HK was originally proposed and experimented on Sina Weibo (Wu et al., 2015) , which may not be generalize well on Twitter.",
"2) SVM-HK loosely couples two separate kernels: a RBF kernel based on handcrafted features, plus a random walk-based kernel which relies on a set of pre-defined keywords for jumping over the nodes probabilistically.",
"This under utilizes the propagation information due to such oversimplified treatment of tree structure.",
"In contrast, SVM-TK is an integrated kernel and can fully utilize the structure by comparing the trees based on both textual and structural similarities.",
"It appears that using bag-of-words is already a decent model evidenced as the fairly good performance of SVM-BOW which is even better than SVM-HK.",
"This is because the features of SVM-HK are handcrafted for binary classification (i.e., non-rumor vs rumor), ignoring the importance of indicative words or units that benefit finer-grained classification which can be captured more effectively by SVM-BOW.",
"The sequential neural model GRU-RNN performs slightly worse than SVM-TK, but much worse than our recursive models.",
"This is because it is a special case of the recursive model where each non-leaf node has only one child.",
"It has to rely on a linear chain as input, which missed out valuable structural information.",
"However, it does learn high-level features from the post content via hidden units of the neural model while SVM-TK cannot which can only evaluates similarities based on the overlapping words among subtrees.",
"Our recursive models are inherently tree-structured and take advantages of representation learning following the propagation structure, thus beats SVM-TK.",
"In the two recursive models, TD-RvNN outperforms BU-RvNN, which indicates that the bottomup model may suffer from larger information loss than the top-down one.",
"This verifies the hypothesis we made in Section 4.3 that the pooling layer For only the non-rumor class, it seems that our method does not perform so well as some featureengineering baselines.",
"This can be explained by the fact that these baselines are trained with additional features such as user information (e.g., profile, verification status, etc) which may contain clues for differentiating non-rumors from rumors.",
"Also, the responses to non-rumors are usually much more diverse with little informative indication, making identification of non-rumors more difficult based on content even with the structure.",
"Early Rumor Detection Performance Detecting rumors at early state of propagation is important so that interventions can be made in a timely manner.",
"We compared different methods in term of different time delays measured by either tweet count received or time elapsed since the source tweet is posted.",
"The performance is evaluated by the accuracy obtained when we incrementally add test data up to the check point given the targeted time delay or tweets volume.",
"Figure 4 shows that the performance of our recursive models climbs more rapidly and starts to supersede the other models at the early stage.",
"Although all the methods are getting to their best per-formance in the end, TD-RvNN and BU-RvNN only need around 8 hours or about 90 tweets to achieve the comparable performance of the best baseline model, i.e., SVM-TK, which needs about 36 hours or around 300 posts, indicating superior early detection performance of our method.",
"Figure 5 shows a sample tree at the early stage of propagation that has been correctly classified as a false rumor by both recursive models.",
"We can see that this false rumor demonstrates typical patterns in subtrees and propagation paths indicative of the falsehood, where a set of responses supporting the parent posts that deny or question the source post are captured by our bottom-up model.",
"Similarly, some patterns of propagation from the root to leaf nodes like \"support→deny→support\" are also seized by our top-down model.",
"In comparison, sequential models may be confused because the supportive key terms such as \"be right\", \"yeah\", \"exactly!\"",
"dominate the responses, and the SVM-TK may miss similar subtrees by just comparing the surface words.",
"Conclusions and Future Work We propose a bottom-up and a top-down treestructured model based on recursive neural networks for rumor detection on Twitter.",
"The inher-ent nature of recursive models allows them using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.",
"Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.",
"In our future work, we plan to integrate other types of information such as user properties into the structured neural models to further enhance representation learning and detect rumor spreaders at the same time.",
"We also plan to use unsupervised models for the task by exploiting structural information."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"4",
"4.1",
"4.2",
"4.3",
"4.4",
"5.1",
"5.2",
"5.3",
"5.4",
"6"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Problem Statement",
"RvNN-based Rumor Detection",
"Standard Recursive Neural Networks",
"Bottom-up RvNN",
"Top-down RvNN",
"Model Training",
"Datasets",
"Experimental Setup",
"Rumor Classification Performance",
"Early Rumor Detection Performance",
"Conclusions and Future Work"
]
} | GEM-SciDuet-train-135#paper-1364#slide-17 | Conclusion and future work | Propose a bottom-up and a top-down tree-structured model based on recursive neural networks for rumor detection on Twitter.
Using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.
Results on two public Twitter datasets show that our method improves rumor detection performance in very large margins as compared to state-of-the-art baselines.
Integrate other types of information such as user properties into the structured neural models to further enhance representation learning
Develop unsupervised models due to massive unlabeled data from social media. | Propose a bottom-up and a top-down tree-structured model based on recursive neural networks for rumor detection on Twitter.
Using propagation tree to guide the learning of representations from tweets content, such as embedding various indicative signals hidden in the structure, for better identifying rumors.
Results on two public Twitter datasets show that our method improves rumor detection performance by very large margins as compared to state-of-the-art baselines.
Integrate other types of information such as user properties into the structured neural models to further enhance representation learning
Develop unsupervised models due to massive unlabeled data from social media. | [] |
GEM-SciDuet-train-136#paper-1365#slide-0 | 1365 | Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning | Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not come at no cost-The resulted distantly-supervised training samples are often very noisy. To combat the noise, most of the recent state-of-theart approaches focus on selecting onebest sentence or calculating soft attention weights over the set of the sentences of one specific entity pair. However, these methods are suboptimal, and the false positive problem is still a key stumbling bottleneck for the performance. We argue that those incorrectly-labeled candidate sentences must be treated with a hard decision, rather than being dealt with soft attention weights. To do this, our paper describes a radical solution-We explore a deep reinforcement learning strategy to generate the false-positive indicator, where we automatically recognize false positives for each relation type without any supervised information. Unlike the removal operation in the previous studies, we redistribute them into the negative examples. The experimental results show that the proposed strategy significantly improves the performance of distant supervision comparing to state-of-the-art systems. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"Introduction Relation extraction is a core task in information extraction and natural language understanding.",
"The goal of relation extraction is to predict relations for entities in a sentence (Zelenko et al., 2003; Bunescu and Mooney, 2005; GuoDong et al., 2005) .",
"For example, given a sentence \"Barack Obama is married to Michelle Obama.",
"\", a relation classifier aims at predicting the relation of \"spouse\".",
"In downstream applications, relation extraction is the key module for constructing knowledge graphs, and it is a vital component of many natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.",
"A major issue encountered in the early development of relation extraction algorithms is the data sparsity issue-It is extremely expensive, and almost impossible for human annotators to go through a large corpus of millions of sentences to provide a large amount of labeled training instances.",
"Therefore, distant supervision relation extraction (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012) becomes popular, because it uses entity pairs from knowledge bases to select a set of noisy instances from unlabeled data.",
"In recent years, neural network approaches (Zeng et al., 2014 (Zeng et al., , 2015 have been proposed to train the relation extractor under these noisy conditions.",
"To suppress the noisy (Roth et al., 2013) , recent stud-ies (Lin et al., 2016) have proposed the use of attention mechanisms to place soft weights on a set of noisy sentences, and select samples.",
"However, we argue that only selecting one example or based on soft attention weights are not the optimal strategy: To improve the robustness, we need a systematic solution to make use of more instances, while removing false positives and placing them in the right place.",
"In this paper, we investigate the possibility of using dynamic selection strategies for robust distant supervision.",
"More specifically, we design a deep reinforcement learning agent, whose goal is to learn to choose whether to remove or remain the distantly supervised candidate instance based on the performance change of the relation classifier.",
"Intuitively, our agent would like to remove false positives, and reconstruct a cleaned set of distantly supervised instances to maximize the reward based on the classification accuracy.",
"Our proposed method is classifier-independent, and it can be applied to any existing distant supervision model.",
"Empirically, we show that our method has brought consistent performance gains in various deep neural network based models, achieving strong performances on the widely used New York Times dataset (Riedel et al., 2010) .",
"Our contributions are three-fold: • We propose a novel deep reinforcement learning framework for robust distant supervision relation extraction.",
"• Our method is model-independent, meaning that it could be applied to any state-of-the-art relation extractors.",
"• We show that our method can boost the performances of recently proposed neural relation extractors.",
"In Section 2, we will discuss related works on distant supervision relation extraction.",
"Next, we will describe our robust distant supervision framework in Section 3.",
"In Section 4, empirical evaluation results are shown.",
"And finally, we conclude in Section 5.",
"Mintz et al.",
"(2009) is the first study that combines dependency path and feature aggregation for distant supervision.",
"However, this approach would introduce a lot of false positives, as the same entity pair might have multiple relations.",
"To alleviate this issue, Hoffmann et al.",
"(2011) address this issue, and propose a model to jointly learn with multiple relations.",
"Surdeanu et al.",
"(2012) further propose a multi-instance multi-label learning framework to improve the performance.",
"Note that these early approaches do not explicitly remove noisy instances, but rather hope that the model would be able to suppress the noise.",
"Related Work Recently, with the advance of neural network techniques, deep learning methods (Zeng et al., 2014 (Zeng et al., , 2015 are introduced, and the hope is to model noisy distant supervision process in the hidden layers.",
"However, their approach only selects one most plausible instance per entity pair, inevitably missing out a lot of valuable training instances.",
"Recently, Lin et al.",
"(2016) propose an attention mechanism to select plausible instances from a set of noisy instances.",
"However, we believe that soft attention weight assignment might not be the optimal solution, since the false positives should be completely removed and placed in the negative set.",
"Ji et al.",
"(2017) combine the external knowledge to rich the representation of entity pair, in which way to improve the accuracy of attention weights.",
"Even though these above-mentioned methods can select high-quality instances, they ignore the false positive case: all the sentences of one entity pair belongs to the false positives.",
"In this work, we take a radical approach to solve this problem-We will make use of the distantly labeled resources as much as possible, while learning a independent false-positive indicator to remove false positives, and place them in the right place.",
"After our ACL submission, we notice that a contemporaneous study Feng et al.",
"(2018) also adopts reinforcement learning to learn an instance selector, but their reward is calculated from the prediction probabilities.",
"In contrast, while in our method, the reward is intuitively reflected by the performance change of the relation classifier.",
"Our approach is also complement to most of the approaches above, and can be directly applied on top of any existing relation extraction classifiers.",
"Reinforcement Learning for Distant Supervision We introduce a performance-driven, policy-based reinforcement learning method to heuristically recognize false positive samples.",
"Comparing to a prior study that has underutilized the distantlysupervised samples (Lin et al., 2016) , we consider an RL agent for robust distant supervision relation extraction.",
"We first describe the definitions of our RL method, including the policy-based agent, external environment, and pre-training strategy.",
"Next, we describe the retraining strategy for our RL agent.",
"The goal of our agent is to determine whether to retain or remove a distantlysupervised sentence, based on the performance change of relation classifier.",
"Finally, we describe the noisy-suppression method, where we teach our policy-based agent to make a redistribution for a cleaner distant supervision training dataset.",
"Distant supervision relation extraction is to predict the relation type of entity pair under the automatically-generated training set.",
"However, the issue is that these distantly-supervised sentences that mention this entity pair may not express the desired relation type.",
"Therefore, what our RL agent should do is to determine whether the distantly-supervised sentence is a true positive instance for this relation type.",
"For reinforcement learning, external environment and RL agent are two necessary components, and a robust agent is trained from the dynamic interaction between these two parts (Arulkumaran et al., 2017) .",
"First, the prerequisite of reinforcement learning is that the external environment should be modeled as a Markov decision process (MDP).",
"However, the traditional setting of relation extraction cannot satisfy this condition: the input sentences are independent of each other.",
"In other words, we cannot merely use the information of the sentence being processed as the state.",
"Thus, we add the information from the early states into the representation of the current state, in which way to model our task as a MDP problem (Fang et al., 2017) .",
"The other component, RL agent, is parameterized with a policy network π θ (s, a) = p(a|s; θ).",
"The probability distribution of actions A = {a remove , a remain } is calculated by policy network based on state vectors.",
"What needs to be noted is that, Deep Q Network (DQN) (Mnih et al., 2013) is also a widelyused RL method; however, it is not suitable for our case, even if our action space is small.",
"First, we cannot compute the immediate reward for every operation; In contrast, the accurate reward can only be obtained after finishing processing the whole training dataset.",
"Second, the stochastic policy of the policy network is capable of prevent-ing the agent from getting stuck in an intermediate state.",
"The following subsections detailedly introduce the definitions of the fundamental components in the proposed RL method.",
"States In order to satisfy the condition of MDP, the state s includes the information from the current sentence and the sentences that have been removed in early states.",
"The semantic and syntactic information of sentence is represented by a continuous real-valued vector.",
"According to some state-of-the-art supervised relation extraction approaches (Zeng et al., 2014; Nguyen and Grishman, 2015) , we utilize both word embedding and position embedding to convert sentence into vector.",
"With this sentence vector, the current state is the concatenation of the current sentence vector and the average vector of the removed sentences in early states.",
"We give relatively larger weight for the vector of the current sentence, in which way to magnify the dominating influence of the current sentence information for the decision of action.",
"Actions At each step, our agent is required to determine whether the instance is false positive for target relation type.",
"Each relation type has a agent 1 .",
"There are two actions for each agent: whether to remove or retain the current instance from the training set.",
"With the initial distantlysupervised dataset that is blended with incorrectlylabeled instances, we hope that our agent is capable of using the policy network to filter noisy instances; Under this cleaned dataset, distant supervision is then expected to achieve better performance.",
"Rewards As previously mentioned, the intuition of our model is that, when the incorrectly-labeled instances are filtered, the better performance of relation classifier will achieve.",
"Therefore, we use the change of performance as the result-driven reward for a series of actions decided by the agent.",
"Compared to accuracy, we adopt the F 1 score as the evaluation criterion, since accuracy might not be an indicative metric in a multi-class classification setting where the data distribution could be imbalanced.",
"Thus, the reward can be formulated as the RL Agent Train Relation Classifier \" #$\" \" # × + # + ×(− # ) Noisy dataset - ./# Cleaned dataset - #$\" Cleaned dataset - # Removed part Removed part Train # = ( \" # -\" #$\" ) Relation Classifier RL Agent Epoch − 1 : Epoch : Figure 2 : The proposed policy-based reinforcement learning framework.",
"The agent tries to remove the wrong-labeled sentences from the distantly-supervised positive dataset P ori .",
"In order to calculate the reward, P ori is split into the training part P ori t and the validation part P ori v ; their corresponding negative part are represented as N ori t and N ori v .",
"In each epoch i, the agent performs a series of actions to recognize the false positive samples from P ori t and treat them as negative samples.",
"Then, a new relation classifier is trained under the new dataset Noisy dataset - ./# + - ./# { 6 #$\" , 6 #$\" } - #$\" - ./# + - # { 6 # , 6 # } {P i t , N i t }.",
"With this relation classifier, F 1 score is calculated from the new validation set {P i v , N i v }, where P i v is also filtered by the current agent.",
"After that, the current reward is measured as the difference of F 1 between the adjacent epochs.",
"difference between the adjacent epochs: R i = α(F i 1 − F i−1 1 ) (1) As this equation shows, in step i, our agent is given a positive reward only if F 1 gets improved; otherwise, the agent will receive a negative reward.",
"Under this setting, the value of reward is proportional to the difference of F 1 , and α is used to convert this difference into a rational numeric range.",
"Naturally, the value of the reward is in a continuous space, which is more reasonable than a binary reward (−1 and 1), because this setting can reflect the number of wrong-labeled instance that the agent has removed.",
"In order to avoid the randomness of F 1 , we use the average F 1 of last five epochs to calculate the reward.",
"Policy Network For each input sentence, our policy network is to determine whether it expresses the target relation type and then make removal action if it is irrelevant to the target relation type.",
"Thus, it is analogous to a binary relation classifier.",
"CNN is commonly used to construct relation classification system (Santos et al., 2015; Xu et al., 2015; Shen and Huang, 2016) , so we adopt a simple CNN with window size c w and kernel size c k , to model policy network π(s; θ).",
"The reason why we do not choice the variants of CNN (Zeng et al., 2015; Lin et al., 2016) that are well-designed for distant supervision is that these two models belong to bag-level models (dealing with a bag of sentences simultaneously) and deal with the multi-classification problem; We just need a model to do binary sentencelevel classification.",
"Naturally, the simpler network is adopted.",
"Training Policy-based Agent Unlike the goal of distant supervision relation extraction, our agent is to determine whether an annotated sentence expresses the target relation type rather than predict the relationship of entity pair, so sentences are treated independently despite belonging to the same entity pair.",
"In distant supervision training dataset, one relation type contains several thousands or ten thousands sentences; moreover, reward R can only be calculated after processing the whole positive set of this relation type.",
"If we randomly initialize the parameters of policy network and train this network by trial and errors, it will waste a lot of time and be inclined to poor convergence properties.",
"In order to overcome this problem, we adopt a supervised learning procedure to pre-train our policy network, in which way to provide a general learning direction for our policy-based agent.",
"Pre-training Strategy The pre-training strategy, inspired from AlphaGo (Silver et al., 2016) , is a common strategy in RL related works to accelerate the training of RL agents.",
"Normally, they utilize a small part of the annotated dataset to train policy networks before reinforcement learning.",
"For example, AlphaGo uses the collected experts moves to do a supervised learning for Go RL agent.",
"However, in distant supervision relation extraction task, there is not any supervised information that can be used unless let linguistic experts to do some manual annotations for part of the entity pairs.",
"However, this is expensive, and it is not the original intention of distant supervision.",
"Under this circumstance, we propose a compromised solution.",
"With well-aligned corpus, the true positive samples should have evident advantage in quantity compared with false positive samples in the distantly-supervised dataset.",
"So, for a specific relation type, we directly treat the distantly-supervised positive set as the positive set, and randomly extract part of distantly-supervised negative set as the negative set.",
"In order to better consider prior information during this pre-training procedure, the amount of negative samples is 10 times of the number of positive samples.",
"It is because, when learning with massive negative samples, the agent is more likely to develop toward a better direction.",
"Cross-entropy cost function is used to train this binary classifier, where the negative label corresponds to the removing action, and the positive label corresponds to the retaining action.",
"(2) J(θ) = i y i log[π(a = y i |s i ; θ)] + (1 − y i )log[1 − π(a = y i |s i ; θ)] Due to the noisy nature of the distantly-labeled instances, if we let this pre-training process overfit this noisy dataset, the predicted probabilities of most samples tend to be close to 0 or 1, which is difficult to be corrected and unnecessarily increases the training cost of reinforcement learning.",
"So, we stop this training process when the accuracy reaches 85% ∼ 90%.",
"Theoretically, our approach can be explained as increasing the entropy of the policy gradient agent, and preventing the entropy of the policy being too low, which means that the lack of exploration may be a concern.",
"3.1.2 Retraining Agent with Rewards As shown in Figure 2 , in order to discover incorrectly-labeled instances without any supervised information, we introduce a policy-based RL method.",
"What our agent tries to deal with is the noisy samples from the distantly-supervised positive dataset; Here we call it as the DS positive dataset.",
"We split it into the training positive set P ori t and the validation positive set P ori v ; naturally, both of these two set are noisy.",
"Correspondingly, the training negative set N ori t and the validation negative set N ori v are constructed by randomly selected from the DS negative dataset.",
"In every epoch, the agent removes a noisy sample set Ψ i from P ori t according to the stochastic policy π(a|s), and we obtain a new positive set P t = P ori t − Ψ i .",
"Because Ψ i is recognized as the wrong-labeled samples, we redistribute it into the negative set N t = N ori t + Ψ i .",
"Under this setting, the scale of training set is constant for each epoch.",
"Now we utilize the cleaned data {P t , N t } to train a relation classifier.",
"The desirable situation is that RL agent has the capacity to increase the performance of relation classifier through relocating incorrectly-labeled false positive instances.",
"Therefore, we use the validation set {P ori v , N ori v } to measure the performance of the current agent.",
"First, this validation set is filtered and redistributed by the current agent as {P v , N v }; the F 1 score of the current relation classifier is calculated from it.",
"Finally, the difference of F 1 scores between the current and previous epoch is used to calculate reward.",
"Next, we will introduce several strategies to train a more robust RL agent.",
"Removing the fixed number of sentences in each epoch In every epoch, we let the RL agent to remove a fixed number of sentences or less (when the number of the removed sentences in one epoch does not reach this fixed number during training), in which way to prevent the case that the agent tries to remove more false positive instances by removing more instances.",
"Under the restriction of fixed number, if the agent decides to remove the current state, it means the chance of removing other states decrease.",
"Therefore, in order to obtain a better reward, the agent should try to remove a instance set that includes more negative instances.",
"Loss function The quality of the RL agent is reflected by the quality of the removed part.",
"After the pre-training process, the agent just possesses Algorithm 1 Retraining agent with rewards for relation k. For a clearer expression, k is omitted in the following algorithm.",
"Require: Positive set {P ori t , P ori v }, Negative set {N ori t , N ori v }, the fixed number of removal γ t , γ v 1: Load parameters θ from pre-trained policy network 2: Initialize s * as the all-zero vector with the same dimension of s j 3: for epoch i = 1 → N do 4: for s j ∈ P ori t do 5: s j = concatenation(s j , s * ) 6: Randomly sample a j ∼ π(a| s j ; θ); compute p j = π(a = 0| s j ; θ) 7: if a j == 0 then Rank T based on p j from high to low, obtain T rank 12: for t i in T rank [: γ t ] do 13: Add t i [0] into Ψ i 14: end for 15: P i t = P ori t − Ψ i , N i t = N ori t + Ψ i , R = α(F i 1 − F i−1 1 ) 19 : Ω i−1 = Ψ i−1 − Ψ i ∩ Ψ i−1 ; Ω i = Ψ i − Ψ i ∩ Ψ i−1 20: 21: Updata θ: g ∝ θ Ω i log π(a|s; θ)R + θ Ω i−1 log π(a|s; θ)(−R) 22: end for the ability to distinguish the obvious false positive instances, which means the discrimination of the indistinguishable wrong-labeled instances are still ambiguous.",
"Particularly, this indistinguishable part is the criterion to reflect the quality of the agent.",
"Therefore, regardless of these easydistinguished instances, the different parts of the removed parts in different epochs are the determinant of the change of F 1 scores.",
"Therefore, we definite two sets: Ω i−1 = Ψ i−1 − (Ψ i ∩ Ψ i−1 ) (3) Ω i = Ψ i − (Ψ i ∩ Ψ i−1 ) (4) where Ψ i is the removed part of epoch i. Ω i−1 and Ω i are represented with the different colors in Figure 2.",
"If F 1 score increases in the epoch i, it means the actions of the epoch i is more reasonable than that in the epoch i − 1.",
"In other words, Ω i is more negative than Ω i−1 .",
"Thus, we assign the positive reward to Ω i and the negative reward to Ω i−1 , and vice versa.",
"In summary, the ultimate loss function is formulated as follow: (5) J(θ) = Ω i log π(a|s; θ)R + Ω i−1 log π(a|s; θ)(−R) Redistributing Training Dataset with Policy-based Agents Through the above reinforcement learning procedure, for each relation type, we obtain a agent as the false-positive indicator.",
"These agents possess the capability of recognizing incorrectly-labeled instances of the corresponding relation types.",
"We adopt these agents as classifiers to recognize false positive samples in the noisy distantly-supervised training dataset.",
"For one entity pair, if all the sentence aligned from corpus are classified as false positive, then this entity pair is redistributed into the negative set.",
"Experiments We adopt a policy-based RL method to generate a series of relation indicators and use them to re-distribute training dataset by moving false positive samples to negative sample set.",
"Therefore, our experiments are intended to demonstrate that our RL agents possess this capability.",
"Datast and Evaluation Metrics We evaluate the proposed method on a commonlyused dataset 2 , which is first presented in Riedel et al.",
"(2010) .",
"This dataset is generated by aligning entity pairs from Freebase with New York Times corpus(NYT).",
"Entity mentions of NYT corpus are recognized by the Stanford named entity recognizer (Finkel et al., 2005) .",
"Similar to the previous works, we adopt the held-out evaluation to evaluate our model, which can provide an approximate measure of the classification ability without costly human evaluation.",
"Similar to the generation of the training set, the entity pairs in test set are also selected from Freebase, which will be predicted under the sentences discovered from the NYT corpus.",
"Experimental Settings Policy-based Agent The action space of our RL agent just includes two actions.",
"Therefore, the agent can be modeled as a binary classifier.",
"We adopt a single-window CNN as this policy network.",
"The detailed hyperparameter settings are presented in Table 1 .",
"As for word embeddings, we directly use the word embedding file released by Lin et al.",
"(2016) 3 , which just keeps the words that appear more than 100 times in NYT.",
"Moreover, we have the same dimension setting of the position embedding, and the maximum length of relative distance is −30 and 30 (\"-\" and \"+\" represent the left and right side of the entities).",
"The learning rate of reinforcement learning is 2e −5 .",
"For each relation type, the fixed number γ t , γ v are according to the pre-trained agent.",
"When one relation type has too many distantsupervised positive sentences (for example, /lo-2 http://iesl.cs.umass.edu/riedel/ecml/ 3 https://github.com/thunlp/NRE Table 2 : Comparison of F 1 scores among three cases: the relation classifier is trained with the original dataset, the redistributed dataset generated by the pre-trained agent, and the redistributed dataset generated by our RL agent respectively.",
"The name of relation types are abbreviated: /peo/per/pob represents /people/person/place of birth cation/location/contains has 75768 sentences), we sample a subset of size 7,500 sentences to train the agent.",
"For the average vector of the removed sentences, in the pre-training process and the first state of the retraining process, it is set as all-zero vector.",
"Relation Classifier for Calculating Reward In order to evaluate a series of actions by agent, we use a simple CNN model, because the simple network is more sensitive to the quality of the training set.",
"The proportion between P ori t and P ori v is 2:1, and they are all derived from the training set of Riedel dataset; the corresponding negative sample sets N ori t and N ori v are randomly selected from the Riedel negative dataset, whose size is twice that of their corresponding positive sets.",
"The Effectiveness of Reinforcement Learning In Table 2 , we list the F 1 scores before and after adopting the proposed RL method.",
"Even though there are 52 actual relation types in Riedel dataset, only 10 relation types have more than 1000 pos- Zeng et al.",
"(2015) and Lin et al.",
"(2016) are both the robust models to solve wrong labeling problem of distant supervision relation extraction.",
"Zeng et al.",
"(2015) combine at-least-one multi-instance learning with deep neural network to extract only one active sentence to predict the relation between entity pair; Lin et al.",
"(2016) combine all sentences of one entity pair and assign soft attention weights to them, in which way to generate a compositive relation representation for this entity pair.",
"However, the false positive phenomenon also includes the case that all the sentences of one entity pair are wrong, which is because the corpus is not completely aligned with the knowledge base.",
"This phenomenon is also common between Riedel dataset and Freebase through our manual inspection.",
"Obviously, there is nothing the above two methods can do in this case.",
"The proposed RL method is to tackle this problem.",
"We adopt our RL agents to redistribute Riedel dataset by moving false positive samples into the negative sample set.",
"Then we use Zeng et al.",
"(2015) and Lin et al.",
"(2016) to predict relations on this cleaned dataset, and compare the performance with that on the original Riedel dataset.",
"As shown in Figure 3 and Figure 4 , under the assistant of our RL agent, the same model can achieve obvious improvement with more reasonable training dataset.",
"In order to give the more intuitive comparison, we calculate the AUC value of each PR curve, which reflects the area size under these curves.",
"These comparable results also indicate the effectiveness of our policy-based RL method.",
"Moreover, as can be seen from the result of t-test evaluation, all the p-values are less than 5e-02, so the improvements are significant.",
"proportional to the original scale, which is in accordance with the actual accident situation.",
"At the same time, we analyze the correlation between the false positive phenomenon and the number of sentences of entity pairs : With this the number ranging from 1 to 5, the corresponding percentages are [55.9%, 32.0%, 3.7%, 4.4%, 0.7%].",
"This distribution is consistent with our assumption.",
"Because Freebase is, to some extent, not completely aligned with the NYT corpus, entity pairs with fewer sentences are more likely to be false positive, which is the major factor hindering the performance of the previous systems.",
"In Table 4 , we present some false positive examples selected by our agents.",
"Taking entity pair (Sami Moubayed, Syria) as an example, it is obvious that there is not any valuable information reflecting relation /people/person/place of birth.",
"Both of these sentences talks about the situation analysis of Syria from the political analyst Sami Moubayed.",
"We also found that, for some entity pairs, even though there are multiple sentences, all of them are identical.",
"This phenomenon also increases the probability of the appearance of false positive samples.",
"Case Study Conclusion In this work, we propose a deep reinforcement learning framework for robust distant supervision.",
"The intuition is that, in contrast to prior works that utilize only one instance per entity pair and use soft attention weights to select plausible distantly supervised examples, we describe a policy-based framework to systematically learn to relocate the false positive samples, and better utilize the unlabeled data.",
"More specifically, our goal is to Table 2. teach the reinforcement agent to optimize the selection/redistribution strategy that maximizes the reward of boosting the performance of relation classification.",
"An important aspect of our work is that our framework does not depend on a specific form of the relation classifier, meaning that it is a plug-and-play technique that could be potentially applied to any relation extraction pipeline.",
"In experiments, we show that our framework boosts the performance of distant supervision relation extraction of various strong deep learning baselines on the widely used New York Times -Freebase dataset."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.1.1",
"3.2",
"4",
"4.1",
"4.2.1",
"4.2.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Reinforcement Learning for Distant Supervision",
"Training Policy-based Agent",
"Pre-training Strategy",
"Redistributing Training Dataset with",
"Experiments",
"Datast and Evaluation Metrics",
"Policy-based Agent",
"Relation Classifier for Calculating Reward",
"The Effectiveness of Reinforcement Learning",
"Conclusion"
]
} | GEM-SciDuet-train-136#paper-1365#slide-0 | Relation Extraction | Plain Text Corpus Entity-Relation Triple Classifier
(Unstructured Info) (Structured Info) | Plain Text Corpus Entity-Relation Triple Classifier
(Unstructured Info) (Structured Info) | [] |
GEM-SciDuet-train-136#paper-1365#slide-1 | 1365 | Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning | Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not come without cost: the resulting distantly-supervised training samples are often very noisy. To combat the noise, most of the recent state-of-the-art approaches focus on selecting the one-best sentence or calculating soft attention weights over the set of sentences of one specific entity pair. However, these methods are suboptimal, and the false positive problem is still a key bottleneck for performance. We argue that those incorrectly-labeled candidate sentences must be treated with a hard decision, rather than being dealt with via soft attention weights. To do this, our paper describes a radical solution: we explore a deep reinforcement learning strategy to generate a false-positive indicator, where we automatically recognize false positives for each relation type without any supervised information. Unlike the removal operation in previous studies, we redistribute them into the negative examples. The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared to state-of-the-art systems. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"Introduction Relation extraction is a core task in information extraction and natural language understanding.",
"The goal of relation extraction is to predict relations for entities in a sentence (Zelenko et al., 2003; Bunescu and Mooney, 2005; GuoDong et al., 2005) .",
"For example, given a sentence \"Barack Obama is married to Michelle Obama.",
"\", a relation classifier aims at predicting the relation of \"spouse\".",
"In downstream applications, relation extraction is the key module for constructing knowledge graphs, and it is a vital component of many natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.",
"A major issue encountered in the early development of relation extraction algorithms is the data sparsity issue-It is extremely expensive, and almost impossible for human annotators to go through a large corpus of millions of sentences to provide a large amount of labeled training instances.",
"Therefore, distant supervision relation extraction (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012) becomes popular, because it uses entity pairs from knowledge bases to select a set of noisy instances from unlabeled data.",
"In recent years, neural network approaches (Zeng et al., 2014 (Zeng et al., , 2015 have been proposed to train the relation extractor under these noisy conditions.",
"To suppress the noisy (Roth et al., 2013) , recent stud-ies (Lin et al., 2016) have proposed the use of attention mechanisms to place soft weights on a set of noisy sentences, and select samples.",
"However, we argue that only selecting one example or based on soft attention weights are not the optimal strategy: To improve the robustness, we need a systematic solution to make use of more instances, while removing false positives and placing them in the right place.",
"In this paper, we investigate the possibility of using dynamic selection strategies for robust distant supervision.",
"More specifically, we design a deep reinforcement learning agent, whose goal is to learn to choose whether to remove or remain the distantly supervised candidate instance based on the performance change of the relation classifier.",
"Intuitively, our agent would like to remove false positives, and reconstruct a cleaned set of distantly supervised instances to maximize the reward based on the classification accuracy.",
"Our proposed method is classifier-independent, and it can be applied to any existing distant supervision model.",
"Empirically, we show that our method has brought consistent performance gains in various deep neural network based models, achieving strong performances on the widely used New York Times dataset (Riedel et al., 2010) .",
"Our contributions are three-fold: • We propose a novel deep reinforcement learning framework for robust distant supervision relation extraction.",
"• Our method is model-independent, meaning that it could be applied to any state-of-the-art relation extractors.",
"• We show that our method can boost the performances of recently proposed neural relation extractors.",
"In Section 2, we will discuss related works on distant supervision relation extraction.",
"Next, we will describe our robust distant supervision framework in Section 3.",
"In Section 4, empirical evaluation results are shown.",
"And finally, we conclude in Section 5.",
"Mintz et al.",
"(2009) is the first study that combines dependency path and feature aggregation for distant supervision.",
"However, this approach would introduce a lot of false positives, as the same entity pair might have multiple relations.",
"To alleviate this issue, Hoffmann et al.",
"(2011) address this issue, and propose a model to jointly learn with multiple relations.",
"Surdeanu et al.",
"(2012) further propose a multi-instance multi-label learning framework to improve the performance.",
"Note that these early approaches do not explicitly remove noisy instances, but rather hope that the model would be able to suppress the noise.",
"Related Work Recently, with the advance of neural network techniques, deep learning methods (Zeng et al., 2014 (Zeng et al., , 2015 are introduced, and the hope is to model noisy distant supervision process in the hidden layers.",
"However, their approach only selects one most plausible instance per entity pair, inevitably missing out a lot of valuable training instances.",
"Recently, Lin et al.",
"(2016) propose an attention mechanism to select plausible instances from a set of noisy instances.",
"However, we believe that soft attention weight assignment might not be the optimal solution, since the false positives should be completely removed and placed in the negative set.",
"Ji et al.",
"(2017) combine the external knowledge to rich the representation of entity pair, in which way to improve the accuracy of attention weights.",
"Even though these above-mentioned methods can select high-quality instances, they ignore the false positive case: all the sentences of one entity pair belongs to the false positives.",
"In this work, we take a radical approach to solve this problem-We will make use of the distantly labeled resources as much as possible, while learning a independent false-positive indicator to remove false positives, and place them in the right place.",
"After our ACL submission, we notice that a contemporaneous study Feng et al.",
"(2018) also adopts reinforcement learning to learn an instance selector, but their reward is calculated from the prediction probabilities.",
"In contrast, while in our method, the reward is intuitively reflected by the performance change of the relation classifier.",
"Our approach is also complement to most of the approaches above, and can be directly applied on top of any existing relation extraction classifiers.",
"Reinforcement Learning for Distant Supervision We introduce a performance-driven, policy-based reinforcement learning method to heuristically recognize false positive samples.",
"Comparing to a prior study that has underutilized the distantlysupervised samples (Lin et al., 2016) , we consider an RL agent for robust distant supervision relation extraction.",
"We first describe the definitions of our RL method, including the policy-based agent, external environment, and pre-training strategy.",
"Next, we describe the retraining strategy for our RL agent.",
"The goal of our agent is to determine whether to retain or remove a distantlysupervised sentence, based on the performance change of relation classifier.",
"Finally, we describe the noisy-suppression method, where we teach our policy-based agent to make a redistribution for a cleaner distant supervision training dataset.",
"Distant supervision relation extraction is to predict the relation type of entity pair under the automatically-generated training set.",
"However, the issue is that these distantly-supervised sentences that mention this entity pair may not express the desired relation type.",
"Therefore, what our RL agent should do is to determine whether the distantly-supervised sentence is a true positive instance for this relation type.",
"For reinforcement learning, external environment and RL agent are two necessary components, and a robust agent is trained from the dynamic interaction between these two parts (Arulkumaran et al., 2017) .",
"First, the prerequisite of reinforcement learning is that the external environment should be modeled as a Markov decision process (MDP).",
"However, the traditional setting of relation extraction cannot satisfy this condition: the input sentences are independent of each other.",
"In other words, we cannot merely use the information of the sentence being processed as the state.",
"Thus, we add the information from the early states into the representation of the current state, in which way to model our task as a MDP problem (Fang et al., 2017) .",
"The other component, RL agent, is parameterized with a policy network π θ (s, a) = p(a|s; θ).",
"The probability distribution of actions A = {a remove , a remain } is calculated by policy network based on state vectors.",
"What needs to be noted is that, Deep Q Network (DQN) (Mnih et al., 2013) is also a widelyused RL method; however, it is not suitable for our case, even if our action space is small.",
"First, we cannot compute the immediate reward for every operation; In contrast, the accurate reward can only be obtained after finishing processing the whole training dataset.",
"Second, the stochastic policy of the policy network is capable of prevent-ing the agent from getting stuck in an intermediate state.",
"The following subsections detailedly introduce the definitions of the fundamental components in the proposed RL method.",
"States In order to satisfy the condition of MDP, the state s includes the information from the current sentence and the sentences that have been removed in early states.",
"The semantic and syntactic information of sentence is represented by a continuous real-valued vector.",
"According to some state-of-the-art supervised relation extraction approaches (Zeng et al., 2014; Nguyen and Grishman, 2015) , we utilize both word embedding and position embedding to convert sentence into vector.",
"With this sentence vector, the current state is the concatenation of the current sentence vector and the average vector of the removed sentences in early states.",
"We give relatively larger weight for the vector of the current sentence, in which way to magnify the dominating influence of the current sentence information for the decision of action.",
"Actions At each step, our agent is required to determine whether the instance is false positive for target relation type.",
"Each relation type has a agent 1 .",
"There are two actions for each agent: whether to remove or retain the current instance from the training set.",
"With the initial distantlysupervised dataset that is blended with incorrectlylabeled instances, we hope that our agent is capable of using the policy network to filter noisy instances; Under this cleaned dataset, distant supervision is then expected to achieve better performance.",
"Rewards As previously mentioned, the intuition of our model is that, when the incorrectly-labeled instances are filtered, the better performance of relation classifier will achieve.",
"Therefore, we use the change of performance as the result-driven reward for a series of actions decided by the agent.",
"Compared to accuracy, we adopt the F 1 score as the evaluation criterion, since accuracy might not be an indicative metric in a multi-class classification setting where the data distribution could be imbalanced.",
"Thus, the reward can be formulated as the RL Agent Train Relation Classifier \" #$\" \" # × + # + ×(− # ) Noisy dataset - ./# Cleaned dataset - #$\" Cleaned dataset - # Removed part Removed part Train # = ( \" # -\" #$\" ) Relation Classifier RL Agent Epoch − 1 : Epoch : Figure 2 : The proposed policy-based reinforcement learning framework.",
"The agent tries to remove the wrong-labeled sentences from the distantly-supervised positive dataset P ori .",
"In order to calculate the reward, P ori is split into the training part P ori t and the validation part P ori v ; their corresponding negative part are represented as N ori t and N ori v .",
"In each epoch i, the agent performs a series of actions to recognize the false positive samples from P ori t and treat them as negative samples.",
"Then, a new relation classifier is trained under the new dataset Noisy dataset - ./# + - ./# { 6 #$\" , 6 #$\" } - #$\" - ./# + - # { 6 # , 6 # } {P i t , N i t }.",
"With this relation classifier, F 1 score is calculated from the new validation set {P i v , N i v }, where P i v is also filtered by the current agent.",
"After that, the current reward is measured as the difference of F 1 between the adjacent epochs.",
"difference between the adjacent epochs: R i = α(F i 1 − F i−1 1 ) (1) As this equation shows, in step i, our agent is given a positive reward only if F 1 gets improved; otherwise, the agent will receive a negative reward.",
"Under this setting, the value of reward is proportional to the difference of F 1 , and α is used to convert this difference into a rational numeric range.",
"Naturally, the value of the reward is in a continuous space, which is more reasonable than a binary reward (−1 and 1), because this setting can reflect the number of wrong-labeled instance that the agent has removed.",
"In order to avoid the randomness of F 1 , we use the average F 1 of last five epochs to calculate the reward.",
"Policy Network For each input sentence, our policy network is to determine whether it expresses the target relation type and then make removal action if it is irrelevant to the target relation type.",
"Thus, it is analogous to a binary relation classifier.",
"CNN is commonly used to construct relation classification system (Santos et al., 2015; Xu et al., 2015; Shen and Huang, 2016) , so we adopt a simple CNN with window size c w and kernel size c k , to model policy network π(s; θ).",
"The reason why we do not choice the variants of CNN (Zeng et al., 2015; Lin et al., 2016) that are well-designed for distant supervision is that these two models belong to bag-level models (dealing with a bag of sentences simultaneously) and deal with the multi-classification problem; We just need a model to do binary sentencelevel classification.",
"Naturally, the simpler network is adopted.",
"Training Policy-based Agent Unlike the goal of distant supervision relation extraction, our agent is to determine whether an annotated sentence expresses the target relation type rather than predict the relationship of entity pair, so sentences are treated independently despite belonging to the same entity pair.",
"In distant supervision training dataset, one relation type contains several thousands or ten thousands sentences; moreover, reward R can only be calculated after processing the whole positive set of this relation type.",
"If we randomly initialize the parameters of policy network and train this network by trial and errors, it will waste a lot of time and be inclined to poor convergence properties.",
"In order to overcome this problem, we adopt a supervised learning procedure to pre-train our policy network, in which way to provide a general learning direction for our policy-based agent.",
"Pre-training Strategy The pre-training strategy, inspired from AlphaGo (Silver et al., 2016) , is a common strategy in RL related works to accelerate the training of RL agents.",
"Normally, they utilize a small part of the annotated dataset to train policy networks before reinforcement learning.",
"For example, AlphaGo uses the collected experts moves to do a supervised learning for Go RL agent.",
"However, in distant supervision relation extraction task, there is not any supervised information that can be used unless let linguistic experts to do some manual annotations for part of the entity pairs.",
"However, this is expensive, and it is not the original intention of distant supervision.",
"Under this circumstance, we propose a compromised solution.",
"With well-aligned corpus, the true positive samples should have evident advantage in quantity compared with false positive samples in the distantly-supervised dataset.",
"So, for a specific relation type, we directly treat the distantly-supervised positive set as the positive set, and randomly extract part of distantly-supervised negative set as the negative set.",
"In order to better consider prior information during this pre-training procedure, the amount of negative samples is 10 times of the number of positive samples.",
"It is because, when learning with massive negative samples, the agent is more likely to develop toward a better direction.",
"Cross-entropy cost function is used to train this binary classifier, where the negative label corresponds to the removing action, and the positive label corresponds to the retaining action.",
"(2) J(θ) = i y i log[π(a = y i |s i ; θ)] + (1 − y i )log[1 − π(a = y i |s i ; θ)] Due to the noisy nature of the distantly-labeled instances, if we let this pre-training process overfit this noisy dataset, the predicted probabilities of most samples tend to be close to 0 or 1, which is difficult to be corrected and unnecessarily increases the training cost of reinforcement learning.",
"So, we stop this training process when the accuracy reaches 85% ∼ 90%.",
"Theoretically, our approach can be explained as increasing the entropy of the policy gradient agent, and preventing the entropy of the policy being too low, which means that the lack of exploration may be a concern.",
"3.1.2 Retraining Agent with Rewards As shown in Figure 2 , in order to discover incorrectly-labeled instances without any supervised information, we introduce a policy-based RL method.",
"What our agent tries to deal with is the noisy samples from the distantly-supervised positive dataset; Here we call it as the DS positive dataset.",
"We split it into the training positive set P ori t and the validation positive set P ori v ; naturally, both of these two set are noisy.",
"Correspondingly, the training negative set N ori t and the validation negative set N ori v are constructed by randomly selected from the DS negative dataset.",
"In every epoch, the agent removes a noisy sample set Ψ i from P ori t according to the stochastic policy π(a|s), and we obtain a new positive set P t = P ori t − Ψ i .",
"Because Ψ i is recognized as the wrong-labeled samples, we redistribute it into the negative set N t = N ori t + Ψ i .",
"Under this setting, the scale of training set is constant for each epoch.",
"Now we utilize the cleaned data {P t , N t } to train a relation classifier.",
"The desirable situation is that RL agent has the capacity to increase the performance of relation classifier through relocating incorrectly-labeled false positive instances.",
"Therefore, we use the validation set {P ori v , N ori v } to measure the performance of the current agent.",
"First, this validation set is filtered and redistributed by the current agent as {P v , N v }; the F 1 score of the current relation classifier is calculated from it.",
"Finally, the difference of F 1 scores between the current and previous epoch is used to calculate reward.",
"Next, we will introduce several strategies to train a more robust RL agent.",
"Removing the fixed number of sentences in each epoch In every epoch, we let the RL agent to remove a fixed number of sentences or less (when the number of the removed sentences in one epoch does not reach this fixed number during training), in which way to prevent the case that the agent tries to remove more false positive instances by removing more instances.",
"Under the restriction of fixed number, if the agent decides to remove the current state, it means the chance of removing other states decrease.",
"Therefore, in order to obtain a better reward, the agent should try to remove a instance set that includes more negative instances.",
"Loss function The quality of the RL agent is reflected by the quality of the removed part.",
"After the pre-training process, the agent just possesses Algorithm 1 Retraining agent with rewards for relation k. For a clearer expression, k is omitted in the following algorithm.",
"Require: Positive set {P ori t , P ori v }, Negative set {N ori t , N ori v }, the fixed number of removal γ t , γ v 1: Load parameters θ from pre-trained policy network 2: Initialize s * as the all-zero vector with the same dimension of s j 3: for epoch i = 1 → N do 4: for s j ∈ P ori t do 5: s j = concatenation(s j , s * ) 6: Randomly sample a j ∼ π(a| s j ; θ); compute p j = π(a = 0| s j ; θ) 7: if a j == 0 then Rank T based on p j from high to low, obtain T rank 12: for t i in T rank [: γ t ] do 13: Add t i [0] into Ψ i 14: end for 15: P i t = P ori t − Ψ i , N i t = N ori t + Ψ i , R = α(F i 1 − F i−1 1 ) 19 : Ω i−1 = Ψ i−1 − Ψ i ∩ Ψ i−1 ; Ω i = Ψ i − Ψ i ∩ Ψ i−1 20: 21: Updata θ: g ∝ θ Ω i log π(a|s; θ)R + θ Ω i−1 log π(a|s; θ)(−R) 22: end for the ability to distinguish the obvious false positive instances, which means the discrimination of the indistinguishable wrong-labeled instances are still ambiguous.",
"Particularly, this indistinguishable part is the criterion to reflect the quality of the agent.",
"Therefore, regardless of these easydistinguished instances, the different parts of the removed parts in different epochs are the determinant of the change of F 1 scores.",
"Therefore, we definite two sets: Ω i−1 = Ψ i−1 − (Ψ i ∩ Ψ i−1 ) (3) Ω i = Ψ i − (Ψ i ∩ Ψ i−1 ) (4) where Ψ i is the removed part of epoch i. Ω i−1 and Ω i are represented with the different colors in Figure 2.",
"If F 1 score increases in the epoch i, it means the actions of the epoch i is more reasonable than that in the epoch i − 1.",
"In other words, Ω i is more negative than Ω i−1 .",
"Thus, we assign the positive reward to Ω i and the negative reward to Ω i−1 , and vice versa.",
"In summary, the ultimate loss function is formulated as follow: (5) J(θ) = Ω i log π(a|s; θ)R + Ω i−1 log π(a|s; θ)(−R) Redistributing Training Dataset with Policy-based Agents Through the above reinforcement learning procedure, for each relation type, we obtain a agent as the false-positive indicator.",
"These agents possess the capability of recognizing incorrectly-labeled instances of the corresponding relation types.",
"We adopt these agents as classifiers to recognize false positive samples in the noisy distantly-supervised training dataset.",
"For one entity pair, if all the sentence aligned from corpus are classified as false positive, then this entity pair is redistributed into the negative set.",
"Experiments We adopt a policy-based RL method to generate a series of relation indicators and use them to re-distribute training dataset by moving false positive samples to negative sample set.",
"Therefore, our experiments are intended to demonstrate that our RL agents possess this capability.",
"Datast and Evaluation Metrics We evaluate the proposed method on a commonlyused dataset 2 , which is first presented in Riedel et al.",
"(2010) .",
"This dataset is generated by aligning entity pairs from Freebase with New York Times corpus(NYT).",
"Entity mentions of NYT corpus are recognized by the Stanford named entity recognizer (Finkel et al., 2005) .",
"Similar to the previous works, we adopt the held-out evaluation to evaluate our model, which can provide an approximate measure of the classification ability without costly human evaluation.",
"Similar to the generation of the training set, the entity pairs in test set are also selected from Freebase, which will be predicted under the sentences discovered from the NYT corpus.",
"Experimental Settings Policy-based Agent The action space of our RL agent just includes two actions.",
"Therefore, the agent can be modeled as a binary classifier.",
"We adopt a single-window CNN as this policy network.",
"The detailed hyperparameter settings are presented in Table 1 .",
"As for word embeddings, we directly use the word embedding file released by Lin et al.",
"(2016) 3 , which just keeps the words that appear more than 100 times in NYT.",
"Moreover, we have the same dimension setting of the position embedding, and the maximum length of relative distance is −30 and 30 (\"-\" and \"+\" represent the left and right side of the entities).",
"The learning rate of reinforcement learning is 2e −5 .",
"For each relation type, the fixed number γ t , γ v are according to the pre-trained agent.",
"When one relation type has too many distantsupervised positive sentences (for example, /lo-2 http://iesl.cs.umass.edu/riedel/ecml/ 3 https://github.com/thunlp/NRE Table 2 : Comparison of F 1 scores among three cases: the relation classifier is trained with the original dataset, the redistributed dataset generated by the pre-trained agent, and the redistributed dataset generated by our RL agent respectively.",
"The name of relation types are abbreviated: /peo/per/pob represents /people/person/place of birth cation/location/contains has 75768 sentences), we sample a subset of size 7,500 sentences to train the agent.",
"For the average vector of the removed sentences, in the pre-training process and the first state of the retraining process, it is set as all-zero vector.",
"Relation Classifier for Calculating Reward In order to evaluate a series of actions by agent, we use a simple CNN model, because the simple network is more sensitive to the quality of the training set.",
"The proportion between P ori t and P ori v is 2:1, and they are all derived from the training set of Riedel dataset; the corresponding negative sample sets N ori t and N ori v are randomly selected from the Riedel negative dataset, whose size is twice that of their corresponding positive sets.",
"The Effectiveness of Reinforcement Learning In Table 2 , we list the F 1 scores before and after adopting the proposed RL method.",
"Even though there are 52 actual relation types in Riedel dataset, only 10 relation types have more than 1000 pos- Zeng et al.",
"(2015) and Lin et al.",
"(2016) are both the robust models to solve wrong labeling problem of distant supervision relation extraction.",
"Zeng et al.",
"(2015) combine at-least-one multi-instance learning with deep neural network to extract only one active sentence to predict the relation between entity pair; Lin et al.",
"(2016) combine all sentences of one entity pair and assign soft attention weights to them, in which way to generate a compositive relation representation for this entity pair.",
"However, the false positive phenomenon also includes the case that all the sentences of one entity pair are wrong, which is because the corpus is not completely aligned with the knowledge base.",
"This phenomenon is also common between Riedel dataset and Freebase through our manual inspection.",
"Obviously, there is nothing the above two methods can do in this case.",
"The proposed RL method is to tackle this problem.",
"We adopt our RL agents to redistribute Riedel dataset by moving false positive samples into the negative sample set.",
"Then we use Zeng et al.",
"(2015) and Lin et al.",
"(2016) to predict relations on this cleaned dataset, and compare the performance with that on the original Riedel dataset.",
"As shown in Figure 3 and Figure 4 , under the assistant of our RL agent, the same model can achieve obvious improvement with more reasonable training dataset.",
"In order to give the more intuitive comparison, we calculate the AUC value of each PR curve, which reflects the area size under these curves.",
"These comparable results also indicate the effectiveness of our policy-based RL method.",
"Moreover, as can be seen from the result of t-test evaluation, all the p-values are less than 5e-02, so the improvements are significant.",
"proportional to the original scale, which is in accordance with the actual accident situation.",
"At the same time, we analyze the correlation between the false positive phenomenon and the number of sentences of entity pairs : With this the number ranging from 1 to 5, the corresponding percentages are [55.9%, 32.0%, 3.7%, 4.4%, 0.7%].",
"This distribution is consistent with our assumption.",
"Because Freebase is, to some extent, not completely aligned with the NYT corpus, entity pairs with fewer sentences are more likely to be false positive, which is the major factor hindering the performance of the previous systems.",
"In Table 4 , we present some false positive examples selected by our agents.",
"Taking entity pair (Sami Moubayed, Syria) as an example, it is obvious that there is not any valuable information reflecting relation /people/person/place of birth.",
"Both of these sentences talks about the situation analysis of Syria from the political analyst Sami Moubayed.",
"We also found that, for some entity pairs, even though there are multiple sentences, all of them are identical.",
"This phenomenon also increases the probability of the appearance of false positive samples.",
"Case Study Conclusion In this work, we propose a deep reinforcement learning framework for robust distant supervision.",
"The intuition is that, in contrast to prior works that utilize only one instance per entity pair and use soft attention weights to select plausible distantly supervised examples, we describe a policy-based framework to systematically learn to relocate the false positive samples, and better utilize the unlabeled data.",
"More specifically, our goal is to Table 2. teach the reinforcement agent to optimize the selection/redistribution strategy that maximizes the reward of boosting the performance of relation classification.",
"An important aspect of our work is that our framework does not depend on a specific form of the relation classifier, meaning that it is a plug-and-play technique that could be potentially applied to any relation extraction pipeline.",
"In experiments, we show that our framework boosts the performance of distant supervision relation extraction of various strong deep learning baselines on the widely used New York Times -Freebase dataset."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.1.1",
"3.2",
"4",
"4.1",
"4.2.1",
"4.2.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Reinforcement Learning for Distant Supervision",
"Training Policy-based Agent",
"Pre-training Strategy",
"Redistributing Training Dataset with",
"Experiments",
"Datast and Evaluation Metrics",
"Policy-based Agent",
"Relation Classifier for Calculating Reward",
"The Effectiveness of Reinforcement Learning",
"Conclusion"
]
} | GEM-SciDuet-train-136#paper-1365#slide-1 | Distant Supervision | If two entities participate in a relation, any sentence that contains those two entities might express that relation.
Nijlen is a municipality located in the Belgian province of Antwerp.
Neural relation extraction with selective attention over instances. | If two entities participate in a relation, any sentence that contains those two entities might express that relation.
Nijlen is a municipality located in the Belgian province of Antwerp.
Neural relation extraction with selective attention over instances. | [] |
GEM-SciDuet-train-136#paper-1365#slide-2 | 1365 | Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning | Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not come at no cost: the resulting distantly-supervised training samples are often very noisy. To combat the noise, most of the recent state-of-the-art approaches focus on selecting one-best sentences or calculating soft attention weights over the set of the sentences of one specific entity pair. However, these methods are suboptimal, and the false positive problem is still a key stumbling bottleneck for the performance. We argue that those incorrectly-labeled candidate sentences must be treated with a hard decision, rather than being dealt with soft attention weights. To do this, our paper describes a radical solution: we explore a deep reinforcement learning strategy to generate the false-positive indicator, where we automatically recognize false positives for each relation type without any supervised information. Unlike the removal operation in the previous studies, we redistribute them into the negative examples. The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared to state-of-the-art systems. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"Introduction Relation extraction is a core task in information extraction and natural language understanding.",
"The goal of relation extraction is to predict relations for entities in a sentence (Zelenko et al., 2003; Bunescu and Mooney, 2005; GuoDong et al., 2005) .",
"For example, given a sentence \"Barack Obama is married to Michelle Obama.",
"\", a relation classifier aims at predicting the relation of \"spouse\".",
"In downstream applications, relation extraction is the key module for constructing knowledge graphs, and it is a vital component of many natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.",
"A major issue encountered in the early development of relation extraction algorithms is the data sparsity issue-It is extremely expensive, and almost impossible for human annotators to go through a large corpus of millions of sentences to provide a large amount of labeled training instances.",
"Therefore, distant supervision relation extraction (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012) becomes popular, because it uses entity pairs from knowledge bases to select a set of noisy instances from unlabeled data.",
"In recent years, neural network approaches (Zeng et al., 2014 (Zeng et al., , 2015 have been proposed to train the relation extractor under these noisy conditions.",
"To suppress the noisy (Roth et al., 2013) , recent stud-ies (Lin et al., 2016) have proposed the use of attention mechanisms to place soft weights on a set of noisy sentences, and select samples.",
"However, we argue that only selecting one example or based on soft attention weights are not the optimal strategy: To improve the robustness, we need a systematic solution to make use of more instances, while removing false positives and placing them in the right place.",
"In this paper, we investigate the possibility of using dynamic selection strategies for robust distant supervision.",
"More specifically, we design a deep reinforcement learning agent, whose goal is to learn to choose whether to remove or remain the distantly supervised candidate instance based on the performance change of the relation classifier.",
"Intuitively, our agent would like to remove false positives, and reconstruct a cleaned set of distantly supervised instances to maximize the reward based on the classification accuracy.",
"Our proposed method is classifier-independent, and it can be applied to any existing distant supervision model.",
"Empirically, we show that our method has brought consistent performance gains in various deep neural network based models, achieving strong performances on the widely used New York Times dataset (Riedel et al., 2010) .",
"Our contributions are three-fold: • We propose a novel deep reinforcement learning framework for robust distant supervision relation extraction.",
"• Our method is model-independent, meaning that it could be applied to any state-of-the-art relation extractors.",
"• We show that our method can boost the performances of recently proposed neural relation extractors.",
"In Section 2, we will discuss related works on distant supervision relation extraction.",
"Next, we will describe our robust distant supervision framework in Section 3.",
"In Section 4, empirical evaluation results are shown.",
"And finally, we conclude in Section 5.",
"Mintz et al.",
"(2009) is the first study that combines dependency path and feature aggregation for distant supervision.",
"However, this approach would introduce a lot of false positives, as the same entity pair might have multiple relations.",
"To alleviate this issue, Hoffmann et al.",
"(2011) address this issue, and propose a model to jointly learn with multiple relations.",
"Surdeanu et al.",
"(2012) further propose a multi-instance multi-label learning framework to improve the performance.",
"Note that these early approaches do not explicitly remove noisy instances, but rather hope that the model would be able to suppress the noise.",
"Related Work Recently, with the advance of neural network techniques, deep learning methods (Zeng et al., 2014 (Zeng et al., , 2015 are introduced, and the hope is to model noisy distant supervision process in the hidden layers.",
"However, their approach only selects one most plausible instance per entity pair, inevitably missing out a lot of valuable training instances.",
"Recently, Lin et al.",
"(2016) propose an attention mechanism to select plausible instances from a set of noisy instances.",
"However, we believe that soft attention weight assignment might not be the optimal solution, since the false positives should be completely removed and placed in the negative set.",
"Ji et al.",
"(2017) combine the external knowledge to rich the representation of entity pair, in which way to improve the accuracy of attention weights.",
"Even though these above-mentioned methods can select high-quality instances, they ignore the false positive case: all the sentences of one entity pair belongs to the false positives.",
"In this work, we take a radical approach to solve this problem-We will make use of the distantly labeled resources as much as possible, while learning a independent false-positive indicator to remove false positives, and place them in the right place.",
"After our ACL submission, we notice that a contemporaneous study Feng et al.",
"(2018) also adopts reinforcement learning to learn an instance selector, but their reward is calculated from the prediction probabilities.",
"In contrast, while in our method, the reward is intuitively reflected by the performance change of the relation classifier.",
"Our approach is also complement to most of the approaches above, and can be directly applied on top of any existing relation extraction classifiers.",
"Reinforcement Learning for Distant Supervision We introduce a performance-driven, policy-based reinforcement learning method to heuristically recognize false positive samples.",
"Comparing to a prior study that has underutilized the distantlysupervised samples (Lin et al., 2016) , we consider an RL agent for robust distant supervision relation extraction.",
"We first describe the definitions of our RL method, including the policy-based agent, external environment, and pre-training strategy.",
"Next, we describe the retraining strategy for our RL agent.",
"The goal of our agent is to determine whether to retain or remove a distantlysupervised sentence, based on the performance change of relation classifier.",
"Finally, we describe the noisy-suppression method, where we teach our policy-based agent to make a redistribution for a cleaner distant supervision training dataset.",
"Distant supervision relation extraction is to predict the relation type of entity pair under the automatically-generated training set.",
"However, the issue is that these distantly-supervised sentences that mention this entity pair may not express the desired relation type.",
"Therefore, what our RL agent should do is to determine whether the distantly-supervised sentence is a true positive instance for this relation type.",
"For reinforcement learning, external environment and RL agent are two necessary components, and a robust agent is trained from the dynamic interaction between these two parts (Arulkumaran et al., 2017) .",
"First, the prerequisite of reinforcement learning is that the external environment should be modeled as a Markov decision process (MDP).",
"However, the traditional setting of relation extraction cannot satisfy this condition: the input sentences are independent of each other.",
"In other words, we cannot merely use the information of the sentence being processed as the state.",
"Thus, we add the information from the early states into the representation of the current state, in which way to model our task as a MDP problem (Fang et al., 2017) .",
"The other component, RL agent, is parameterized with a policy network π θ (s, a) = p(a|s; θ).",
"The probability distribution of actions A = {a remove , a remain } is calculated by policy network based on state vectors.",
"What needs to be noted is that, Deep Q Network (DQN) (Mnih et al., 2013) is also a widelyused RL method; however, it is not suitable for our case, even if our action space is small.",
"First, we cannot compute the immediate reward for every operation; In contrast, the accurate reward can only be obtained after finishing processing the whole training dataset.",
"Second, the stochastic policy of the policy network is capable of prevent-ing the agent from getting stuck in an intermediate state.",
"The following subsections detailedly introduce the definitions of the fundamental components in the proposed RL method.",
"States In order to satisfy the condition of MDP, the state s includes the information from the current sentence and the sentences that have been removed in early states.",
"The semantic and syntactic information of sentence is represented by a continuous real-valued vector.",
"According to some state-of-the-art supervised relation extraction approaches (Zeng et al., 2014; Nguyen and Grishman, 2015) , we utilize both word embedding and position embedding to convert sentence into vector.",
"With this sentence vector, the current state is the concatenation of the current sentence vector and the average vector of the removed sentences in early states.",
"We give relatively larger weight for the vector of the current sentence, in which way to magnify the dominating influence of the current sentence information for the decision of action.",
"Actions At each step, our agent is required to determine whether the instance is false positive for target relation type.",
"Each relation type has a agent 1 .",
"There are two actions for each agent: whether to remove or retain the current instance from the training set.",
"With the initial distantlysupervised dataset that is blended with incorrectlylabeled instances, we hope that our agent is capable of using the policy network to filter noisy instances; Under this cleaned dataset, distant supervision is then expected to achieve better performance.",
"Rewards As previously mentioned, the intuition of our model is that, when the incorrectly-labeled instances are filtered, the better performance of relation classifier will achieve.",
"Therefore, we use the change of performance as the result-driven reward for a series of actions decided by the agent.",
"Compared to accuracy, we adopt the F 1 score as the evaluation criterion, since accuracy might not be an indicative metric in a multi-class classification setting where the data distribution could be imbalanced.",
"Thus, the reward can be formulated as the RL Agent Train Relation Classifier \" #$\" \" # × + # + ×(− # ) Noisy dataset - ./# Cleaned dataset - #$\" Cleaned dataset - # Removed part Removed part Train # = ( \" # -\" #$\" ) Relation Classifier RL Agent Epoch − 1 : Epoch : Figure 2 : The proposed policy-based reinforcement learning framework.",
"The agent tries to remove the wrong-labeled sentences from the distantly-supervised positive dataset P ori .",
"In order to calculate the reward, P ori is split into the training part P ori t and the validation part P ori v ; their corresponding negative part are represented as N ori t and N ori v .",
"In each epoch i, the agent performs a series of actions to recognize the false positive samples from P ori t and treat them as negative samples.",
"Then, a new relation classifier is trained under the new dataset Noisy dataset - ./# + - ./# { 6 #$\" , 6 #$\" } - #$\" - ./# + - # { 6 # , 6 # } {P i t , N i t }.",
"With this relation classifier, F 1 score is calculated from the new validation set {P i v , N i v }, where P i v is also filtered by the current agent.",
"After that, the current reward is measured as the difference of F 1 between the adjacent epochs.",
"difference between the adjacent epochs: R i = α(F i 1 − F i−1 1 ) (1) As this equation shows, in step i, our agent is given a positive reward only if F 1 gets improved; otherwise, the agent will receive a negative reward.",
"Under this setting, the value of reward is proportional to the difference of F 1 , and α is used to convert this difference into a rational numeric range.",
"Naturally, the value of the reward is in a continuous space, which is more reasonable than a binary reward (−1 and 1), because this setting can reflect the number of wrong-labeled instance that the agent has removed.",
"In order to avoid the randomness of F 1 , we use the average F 1 of last five epochs to calculate the reward.",
"Policy Network For each input sentence, our policy network is to determine whether it expresses the target relation type and then make removal action if it is irrelevant to the target relation type.",
"Thus, it is analogous to a binary relation classifier.",
"CNN is commonly used to construct relation classification system (Santos et al., 2015; Xu et al., 2015; Shen and Huang, 2016) , so we adopt a simple CNN with window size c w and kernel size c k , to model policy network π(s; θ).",
"The reason why we do not choice the variants of CNN (Zeng et al., 2015; Lin et al., 2016) that are well-designed for distant supervision is that these two models belong to bag-level models (dealing with a bag of sentences simultaneously) and deal with the multi-classification problem; We just need a model to do binary sentencelevel classification.",
"Naturally, the simpler network is adopted.",
"Training Policy-based Agent Unlike the goal of distant supervision relation extraction, our agent is to determine whether an annotated sentence expresses the target relation type rather than predict the relationship of entity pair, so sentences are treated independently despite belonging to the same entity pair.",
"In distant supervision training dataset, one relation type contains several thousands or ten thousands sentences; moreover, reward R can only be calculated after processing the whole positive set of this relation type.",
"If we randomly initialize the parameters of policy network and train this network by trial and errors, it will waste a lot of time and be inclined to poor convergence properties.",
"In order to overcome this problem, we adopt a supervised learning procedure to pre-train our policy network, in which way to provide a general learning direction for our policy-based agent.",
"Pre-training Strategy The pre-training strategy, inspired from AlphaGo (Silver et al., 2016) , is a common strategy in RL related works to accelerate the training of RL agents.",
"Normally, they utilize a small part of the annotated dataset to train policy networks before reinforcement learning.",
"For example, AlphaGo uses the collected experts moves to do a supervised learning for Go RL agent.",
"However, in distant supervision relation extraction task, there is not any supervised information that can be used unless let linguistic experts to do some manual annotations for part of the entity pairs.",
"However, this is expensive, and it is not the original intention of distant supervision.",
"Under this circumstance, we propose a compromised solution.",
"With well-aligned corpus, the true positive samples should have evident advantage in quantity compared with false positive samples in the distantly-supervised dataset.",
"So, for a specific relation type, we directly treat the distantly-supervised positive set as the positive set, and randomly extract part of distantly-supervised negative set as the negative set.",
"In order to better consider prior information during this pre-training procedure, the amount of negative samples is 10 times of the number of positive samples.",
"It is because, when learning with massive negative samples, the agent is more likely to develop toward a better direction.",
"Cross-entropy cost function is used to train this binary classifier, where the negative label corresponds to the removing action, and the positive label corresponds to the retaining action.",
"(2) J(θ) = i y i log[π(a = y i |s i ; θ)] + (1 − y i )log[1 − π(a = y i |s i ; θ)] Due to the noisy nature of the distantly-labeled instances, if we let this pre-training process overfit this noisy dataset, the predicted probabilities of most samples tend to be close to 0 or 1, which is difficult to be corrected and unnecessarily increases the training cost of reinforcement learning.",
"So, we stop this training process when the accuracy reaches 85% ∼ 90%.",
"Theoretically, our approach can be explained as increasing the entropy of the policy gradient agent, and preventing the entropy of the policy being too low, which means that the lack of exploration may be a concern.",
"3.1.2 Retraining Agent with Rewards As shown in Figure 2 , in order to discover incorrectly-labeled instances without any supervised information, we introduce a policy-based RL method.",
"What our agent tries to deal with is the noisy samples from the distantly-supervised positive dataset; Here we call it as the DS positive dataset.",
"We split it into the training positive set P ori t and the validation positive set P ori v ; naturally, both of these two set are noisy.",
"Correspondingly, the training negative set N ori t and the validation negative set N ori v are constructed by randomly selected from the DS negative dataset.",
"In every epoch, the agent removes a noisy sample set Ψ i from P ori t according to the stochastic policy π(a|s), and we obtain a new positive set P t = P ori t − Ψ i .",
"Because Ψ i is recognized as the wrong-labeled samples, we redistribute it into the negative set N t = N ori t + Ψ i .",
"Under this setting, the scale of training set is constant for each epoch.",
"Now we utilize the cleaned data {P t , N t } to train a relation classifier.",
"The desirable situation is that RL agent has the capacity to increase the performance of relation classifier through relocating incorrectly-labeled false positive instances.",
"Therefore, we use the validation set {P ori v , N ori v } to measure the performance of the current agent.",
"First, this validation set is filtered and redistributed by the current agent as {P v , N v }; the F 1 score of the current relation classifier is calculated from it.",
"Finally, the difference of F 1 scores between the current and previous epoch is used to calculate reward.",
"Next, we will introduce several strategies to train a more robust RL agent.",
"Removing the fixed number of sentences in each epoch In every epoch, we let the RL agent to remove a fixed number of sentences or less (when the number of the removed sentences in one epoch does not reach this fixed number during training), in which way to prevent the case that the agent tries to remove more false positive instances by removing more instances.",
"Under the restriction of fixed number, if the agent decides to remove the current state, it means the chance of removing other states decrease.",
"Therefore, in order to obtain a better reward, the agent should try to remove a instance set that includes more negative instances.",
"Loss function The quality of the RL agent is reflected by the quality of the removed part.",
"After the pre-training process, the agent just possesses Algorithm 1 Retraining agent with rewards for relation k. For a clearer expression, k is omitted in the following algorithm.",
"Require: Positive set {P ori t , P ori v }, Negative set {N ori t , N ori v }, the fixed number of removal γ t , γ v 1: Load parameters θ from pre-trained policy network 2: Initialize s * as the all-zero vector with the same dimension of s j 3: for epoch i = 1 → N do 4: for s j ∈ P ori t do 5: s j = concatenation(s j , s * ) 6: Randomly sample a j ∼ π(a| s j ; θ); compute p j = π(a = 0| s j ; θ) 7: if a j == 0 then Rank T based on p j from high to low, obtain T rank 12: for t i in T rank [: γ t ] do 13: Add t i [0] into Ψ i 14: end for 15: P i t = P ori t − Ψ i , N i t = N ori t + Ψ i , R = α(F i 1 − F i−1 1 ) 19 : Ω i−1 = Ψ i−1 − Ψ i ∩ Ψ i−1 ; Ω i = Ψ i − Ψ i ∩ Ψ i−1 20: 21: Updata θ: g ∝ θ Ω i log π(a|s; θ)R + θ Ω i−1 log π(a|s; θ)(−R) 22: end for the ability to distinguish the obvious false positive instances, which means the discrimination of the indistinguishable wrong-labeled instances are still ambiguous.",
"Particularly, this indistinguishable part is the criterion to reflect the quality of the agent.",
"Therefore, regardless of these easydistinguished instances, the different parts of the removed parts in different epochs are the determinant of the change of F 1 scores.",
"Therefore, we definite two sets: Ω i−1 = Ψ i−1 − (Ψ i ∩ Ψ i−1 ) (3) Ω i = Ψ i − (Ψ i ∩ Ψ i−1 ) (4) where Ψ i is the removed part of epoch i. Ω i−1 and Ω i are represented with the different colors in Figure 2.",
"If F 1 score increases in the epoch i, it means the actions of the epoch i is more reasonable than that in the epoch i − 1.",
"In other words, Ω i is more negative than Ω i−1 .",
"Thus, we assign the positive reward to Ω i and the negative reward to Ω i−1 , and vice versa.",
"In summary, the ultimate loss function is formulated as follow: (5) J(θ) = Ω i log π(a|s; θ)R + Ω i−1 log π(a|s; θ)(−R) Redistributing Training Dataset with Policy-based Agents Through the above reinforcement learning procedure, for each relation type, we obtain a agent as the false-positive indicator.",
"These agents possess the capability of recognizing incorrectly-labeled instances of the corresponding relation types.",
"We adopt these agents as classifiers to recognize false positive samples in the noisy distantly-supervised training dataset.",
"For one entity pair, if all the sentence aligned from corpus are classified as false positive, then this entity pair is redistributed into the negative set.",
"Experiments We adopt a policy-based RL method to generate a series of relation indicators and use them to re-distribute training dataset by moving false positive samples to negative sample set.",
"Therefore, our experiments are intended to demonstrate that our RL agents possess this capability.",
"Datast and Evaluation Metrics We evaluate the proposed method on a commonlyused dataset 2 , which is first presented in Riedel et al.",
"(2010) .",
"This dataset is generated by aligning entity pairs from Freebase with New York Times corpus(NYT).",
"Entity mentions of NYT corpus are recognized by the Stanford named entity recognizer (Finkel et al., 2005) .",
"Similar to the previous works, we adopt the held-out evaluation to evaluate our model, which can provide an approximate measure of the classification ability without costly human evaluation.",
"Similar to the generation of the training set, the entity pairs in test set are also selected from Freebase, which will be predicted under the sentences discovered from the NYT corpus.",
"Experimental Settings Policy-based Agent The action space of our RL agent just includes two actions.",
"Therefore, the agent can be modeled as a binary classifier.",
"We adopt a single-window CNN as this policy network.",
"The detailed hyperparameter settings are presented in Table 1 .",
"As for word embeddings, we directly use the word embedding file released by Lin et al.",
"(2016) 3 , which just keeps the words that appear more than 100 times in NYT.",
"Moreover, we have the same dimension setting of the position embedding, and the maximum length of relative distance is −30 and 30 (\"-\" and \"+\" represent the left and right side of the entities).",
"The learning rate of reinforcement learning is 2e −5 .",
"For each relation type, the fixed number γ t , γ v are according to the pre-trained agent.",
"When one relation type has too many distantsupervised positive sentences (for example, /lo-2 http://iesl.cs.umass.edu/riedel/ecml/ 3 https://github.com/thunlp/NRE Table 2 : Comparison of F 1 scores among three cases: the relation classifier is trained with the original dataset, the redistributed dataset generated by the pre-trained agent, and the redistributed dataset generated by our RL agent respectively.",
"The name of relation types are abbreviated: /peo/per/pob represents /people/person/place of birth cation/location/contains has 75768 sentences), we sample a subset of size 7,500 sentences to train the agent.",
"For the average vector of the removed sentences, in the pre-training process and the first state of the retraining process, it is set as all-zero vector.",
"Relation Classifier for Calculating Reward In order to evaluate a series of actions by agent, we use a simple CNN model, because the simple network is more sensitive to the quality of the training set.",
"The proportion between P ori t and P ori v is 2:1, and they are all derived from the training set of Riedel dataset; the corresponding negative sample sets N ori t and N ori v are randomly selected from the Riedel negative dataset, whose size is twice that of their corresponding positive sets.",
"The Effectiveness of Reinforcement Learning In Table 2 , we list the F 1 scores before and after adopting the proposed RL method.",
"Even though there are 52 actual relation types in Riedel dataset, only 10 relation types have more than 1000 pos- Zeng et al.",
"(2015) and Lin et al.",
"(2016) are both the robust models to solve wrong labeling problem of distant supervision relation extraction.",
"Zeng et al.",
"(2015) combine at-least-one multi-instance learning with deep neural network to extract only one active sentence to predict the relation between entity pair; Lin et al.",
"(2016) combine all sentences of one entity pair and assign soft attention weights to them, in which way to generate a compositive relation representation for this entity pair.",
"However, the false positive phenomenon also includes the case that all the sentences of one entity pair are wrong, which is because the corpus is not completely aligned with the knowledge base.",
"This phenomenon is also common between Riedel dataset and Freebase through our manual inspection.",
"Obviously, there is nothing the above two methods can do in this case.",
"The proposed RL method is to tackle this problem.",
"We adopt our RL agents to redistribute Riedel dataset by moving false positive samples into the negative sample set.",
"Then we use Zeng et al.",
"(2015) and Lin et al.",
"(2016) to predict relations on this cleaned dataset, and compare the performance with that on the original Riedel dataset.",
"As shown in Figure 3 and Figure 4 , under the assistant of our RL agent, the same model can achieve obvious improvement with more reasonable training dataset.",
"In order to give the more intuitive comparison, we calculate the AUC value of each PR curve, which reflects the area size under these curves.",
"These comparable results also indicate the effectiveness of our policy-based RL method.",
"Moreover, as can be seen from the result of t-test evaluation, all the p-values are less than 5e-02, so the improvements are significant.",
"proportional to the original scale, which is in accordance with the actual accident situation.",
"At the same time, we analyze the correlation between the false positive phenomenon and the number of sentences of entity pairs : With this the number ranging from 1 to 5, the corresponding percentages are [55.9%, 32.0%, 3.7%, 4.4%, 0.7%].",
"This distribution is consistent with our assumption.",
"Because Freebase is, to some extent, not completely aligned with the NYT corpus, entity pairs with fewer sentences are more likely to be false positive, which is the major factor hindering the performance of the previous systems.",
"In Table 4 , we present some false positive examples selected by our agents.",
"Taking entity pair (Sami Moubayed, Syria) as an example, it is obvious that there is not any valuable information reflecting relation /people/person/place of birth.",
"Both of these sentences talks about the situation analysis of Syria from the political analyst Sami Moubayed.",
"We also found that, for some entity pairs, even though there are multiple sentences, all of them are identical.",
"This phenomenon also increases the probability of the appearance of false positive samples.",
"Case Study Conclusion In this work, we propose a deep reinforcement learning framework for robust distant supervision.",
"The intuition is that, in contrast to prior works that utilize only one instance per entity pair and use soft attention weights to select plausible distantly supervised examples, we describe a policy-based framework to systematically learn to relocate the false positive samples, and better utilize the unlabeled data.",
"More specifically, our goal is to Table 2. teach the reinforcement agent to optimize the selection/redistribution strategy that maximizes the reward of boosting the performance of relation classification.",
"An important aspect of our work is that our framework does not depend on a specific form of the relation classifier, meaning that it is a plug-and-play technique that could be potentially applied to any relation extraction pipeline.",
"In experiments, we show that our framework boosts the performance of distant supervision relation extraction of various strong deep learning baselines on the widely used New York Times -Freebase dataset."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.1.1",
"3.2",
"4",
"4.1",
"4.2.1",
"4.2.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Reinforcement Learning for Distant Supervision",
"Training Policy-based Agent",
"Pre-training Strategy",
"Redistributing Training Dataset with",
"Experiments",
"Datast and Evaluation Metrics",
"Policy-based Agent",
"Relation Classifier for Calculating Reward",
"The Effectiveness of Reinforcement Learning",
"Conclusion"
]
} | GEM-SciDuet-train-136#paper-1365#slide-2 | Wrong Labeling | Place_of_Death (William ODwyer, New York city)
i. Some New York city mayors William ODwyer, Vincent R. Impellitteri and Abraham Beame were born abroad.
Entity-Pair Level ii. Plenty of local officials have, too, including two New York city mayors,
Most of entity pairs only have several sentences | Place_of_Death (William ODwyer, New York city)
i. Some New York city mayors William ODwyer, Vincent R. Impellitteri and Abraham Beame were born abroad.
Entity-Pair Level ii. Plenty of local officials have, too, including two New York city mayors,
Most of entity pairs only have several sentences | [] |
GEM-SciDuet-train-136#paper-1365#slide-3 | 1365 | Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning | Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not come at no cost: the resulting distantly-supervised training samples are often very noisy. To combat the noise, most of the recent state-of-the-art approaches focus on selecting one-best sentences or calculating soft attention weights over the set of the sentences of one specific entity pair. However, these methods are suboptimal, and the false positive problem is still a key stumbling bottleneck for the performance. We argue that those incorrectly-labeled candidate sentences must be treated with a hard decision, rather than being dealt with soft attention weights. To do this, our paper describes a radical solution: we explore a deep reinforcement learning strategy to generate the false-positive indicator, where we automatically recognize false positives for each relation type without any supervised information. Unlike the removal operation in the previous studies, we redistribute them into the negative examples. The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared to state-of-the-art systems. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"Introduction Relation extraction is a core task in information extraction and natural language understanding.",
"The goal of relation extraction is to predict relations for entities in a sentence (Zelenko et al., 2003; Bunescu and Mooney, 2005; GuoDong et al., 2005) .",
"For example, given a sentence \"Barack Obama is married to Michelle Obama.",
"\", a relation classifier aims at predicting the relation of \"spouse\".",
"In downstream applications, relation extraction is the key module for constructing knowledge graphs, and it is a vital component of many natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.",
"A major issue encountered in the early development of relation extraction algorithms is the data sparsity issue-It is extremely expensive, and almost impossible for human annotators to go through a large corpus of millions of sentences to provide a large amount of labeled training instances.",
"Therefore, distant supervision relation extraction (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012) becomes popular, because it uses entity pairs from knowledge bases to select a set of noisy instances from unlabeled data.",
"In recent years, neural network approaches (Zeng et al., 2014 (Zeng et al., , 2015 have been proposed to train the relation extractor under these noisy conditions.",
"To suppress the noisy (Roth et al., 2013) , recent stud-ies (Lin et al., 2016) have proposed the use of attention mechanisms to place soft weights on a set of noisy sentences, and select samples.",
"However, we argue that only selecting one example or based on soft attention weights are not the optimal strategy: To improve the robustness, we need a systematic solution to make use of more instances, while removing false positives and placing them in the right place.",
"In this paper, we investigate the possibility of using dynamic selection strategies for robust distant supervision.",
"More specifically, we design a deep reinforcement learning agent, whose goal is to learn to choose whether to remove or remain the distantly supervised candidate instance based on the performance change of the relation classifier.",
"Intuitively, our agent would like to remove false positives, and reconstruct a cleaned set of distantly supervised instances to maximize the reward based on the classification accuracy.",
"Our proposed method is classifier-independent, and it can be applied to any existing distant supervision model.",
"Empirically, we show that our method has brought consistent performance gains in various deep neural network based models, achieving strong performances on the widely used New York Times dataset (Riedel et al., 2010) .",
"Our contributions are three-fold: • We propose a novel deep reinforcement learning framework for robust distant supervision relation extraction.",
"• Our method is model-independent, meaning that it could be applied to any state-of-the-art relation extractors.",
"• We show that our method can boost the performances of recently proposed neural relation extractors.",
"In Section 2, we will discuss related works on distant supervision relation extraction.",
"Next, we will describe our robust distant supervision framework in Section 3.",
"In Section 4, empirical evaluation results are shown.",
"And finally, we conclude in Section 5.",
"Mintz et al.",
"(2009) is the first study that combines dependency path and feature aggregation for distant supervision.",
"However, this approach would introduce a lot of false positives, as the same entity pair might have multiple relations.",
"To alleviate this issue, Hoffmann et al.",
"(2011) address this issue, and propose a model to jointly learn with multiple relations.",
"Surdeanu et al.",
"(2012) further propose a multi-instance multi-label learning framework to improve the performance.",
"Note that these early approaches do not explicitly remove noisy instances, but rather hope that the model would be able to suppress the noise.",
"Related Work Recently, with the advance of neural network techniques, deep learning methods (Zeng et al., 2014 (Zeng et al., , 2015 are introduced, and the hope is to model noisy distant supervision process in the hidden layers.",
"However, their approach only selects one most plausible instance per entity pair, inevitably missing out a lot of valuable training instances.",
"Recently, Lin et al.",
"(2016) propose an attention mechanism to select plausible instances from a set of noisy instances.",
"However, we believe that soft attention weight assignment might not be the optimal solution, since the false positives should be completely removed and placed in the negative set.",
"Ji et al.",
"(2017) combine the external knowledge to rich the representation of entity pair, in which way to improve the accuracy of attention weights.",
"Even though these above-mentioned methods can select high-quality instances, they ignore the false positive case: all the sentences of one entity pair belongs to the false positives.",
"In this work, we take a radical approach to solve this problem-We will make use of the distantly labeled resources as much as possible, while learning a independent false-positive indicator to remove false positives, and place them in the right place.",
"After our ACL submission, we notice that a contemporaneous study Feng et al.",
"(2018) also adopts reinforcement learning to learn an instance selector, but their reward is calculated from the prediction probabilities.",
"In contrast, while in our method, the reward is intuitively reflected by the performance change of the relation classifier.",
"Our approach is also complement to most of the approaches above, and can be directly applied on top of any existing relation extraction classifiers.",
"Reinforcement Learning for Distant Supervision We introduce a performance-driven, policy-based reinforcement learning method to heuristically recognize false positive samples.",
"Comparing to a prior study that has underutilized the distantlysupervised samples (Lin et al., 2016) , we consider an RL agent for robust distant supervision relation extraction.",
"We first describe the definitions of our RL method, including the policy-based agent, external environment, and pre-training strategy.",
"Next, we describe the retraining strategy for our RL agent.",
"The goal of our agent is to determine whether to retain or remove a distantlysupervised sentence, based on the performance change of relation classifier.",
"Finally, we describe the noisy-suppression method, where we teach our policy-based agent to make a redistribution for a cleaner distant supervision training dataset.",
"Distant supervision relation extraction is to predict the relation type of entity pair under the automatically-generated training set.",
"However, the issue is that these distantly-supervised sentences that mention this entity pair may not express the desired relation type.",
"Therefore, what our RL agent should do is to determine whether the distantly-supervised sentence is a true positive instance for this relation type.",
"For reinforcement learning, external environment and RL agent are two necessary components, and a robust agent is trained from the dynamic interaction between these two parts (Arulkumaran et al., 2017) .",
"First, the prerequisite of reinforcement learning is that the external environment should be modeled as a Markov decision process (MDP).",
"However, the traditional setting of relation extraction cannot satisfy this condition: the input sentences are independent of each other.",
"In other words, we cannot merely use the information of the sentence being processed as the state.",
"Thus, we add the information from the early states into the representation of the current state, in which way to model our task as a MDP problem (Fang et al., 2017) .",
"The other component, RL agent, is parameterized with a policy network π θ (s, a) = p(a|s; θ).",
"The probability distribution of actions A = {a remove , a remain } is calculated by policy network based on state vectors.",
"What needs to be noted is that, Deep Q Network (DQN) (Mnih et al., 2013) is also a widelyused RL method; however, it is not suitable for our case, even if our action space is small.",
"First, we cannot compute the immediate reward for every operation; In contrast, the accurate reward can only be obtained after finishing processing the whole training dataset.",
"Second, the stochastic policy of the policy network is capable of prevent-ing the agent from getting stuck in an intermediate state.",
"The following subsections detailedly introduce the definitions of the fundamental components in the proposed RL method.",
"States In order to satisfy the condition of MDP, the state s includes the information from the current sentence and the sentences that have been removed in early states.",
"The semantic and syntactic information of sentence is represented by a continuous real-valued vector.",
"According to some state-of-the-art supervised relation extraction approaches (Zeng et al., 2014; Nguyen and Grishman, 2015) , we utilize both word embedding and position embedding to convert sentence into vector.",
"With this sentence vector, the current state is the concatenation of the current sentence vector and the average vector of the removed sentences in early states.",
"We give relatively larger weight for the vector of the current sentence, in which way to magnify the dominating influence of the current sentence information for the decision of action.",
"Actions At each step, our agent is required to determine whether the instance is false positive for target relation type.",
"Each relation type has a agent 1 .",
"There are two actions for each agent: whether to remove or retain the current instance from the training set.",
"With the initial distantlysupervised dataset that is blended with incorrectlylabeled instances, we hope that our agent is capable of using the policy network to filter noisy instances; Under this cleaned dataset, distant supervision is then expected to achieve better performance.",
"Rewards As previously mentioned, the intuition of our model is that, when the incorrectly-labeled instances are filtered, the better performance of relation classifier will achieve.",
"Therefore, we use the change of performance as the result-driven reward for a series of actions decided by the agent.",
"Compared to accuracy, we adopt the F 1 score as the evaluation criterion, since accuracy might not be an indicative metric in a multi-class classification setting where the data distribution could be imbalanced.",
"Thus, the reward can be formulated as the RL Agent Train Relation Classifier \" #$\" \" # × + # + ×(− # ) Noisy dataset - ./# Cleaned dataset - #$\" Cleaned dataset - # Removed part Removed part Train # = ( \" # -\" #$\" ) Relation Classifier RL Agent Epoch − 1 : Epoch : Figure 2 : The proposed policy-based reinforcement learning framework.",
"The agent tries to remove the wrong-labeled sentences from the distantly-supervised positive dataset P ori .",
"In order to calculate the reward, P ori is split into the training part P ori t and the validation part P ori v ; their corresponding negative part are represented as N ori t and N ori v .",
"In each epoch i, the agent performs a series of actions to recognize the false positive samples from P ori t and treat them as negative samples.",
"Then, a new relation classifier is trained under the new dataset Noisy dataset - ./# + - ./# { 6 #$\" , 6 #$\" } - #$\" - ./# + - # { 6 # , 6 # } {P i t , N i t }.",
"With this relation classifier, F 1 score is calculated from the new validation set {P i v , N i v }, where P i v is also filtered by the current agent.",
"After that, the current reward is measured as the difference of F 1 between the adjacent epochs.",
"difference between the adjacent epochs: R i = α(F i 1 − F i−1 1 ) (1) As this equation shows, in step i, our agent is given a positive reward only if F 1 gets improved; otherwise, the agent will receive a negative reward.",
"Under this setting, the value of reward is proportional to the difference of F 1 , and α is used to convert this difference into a rational numeric range.",
"Naturally, the value of the reward is in a continuous space, which is more reasonable than a binary reward (−1 and 1), because this setting can reflect the number of wrong-labeled instance that the agent has removed.",
"In order to avoid the randomness of F 1 , we use the average F 1 of last five epochs to calculate the reward.",
"Policy Network For each input sentence, our policy network is to determine whether it expresses the target relation type and then make removal action if it is irrelevant to the target relation type.",
"Thus, it is analogous to a binary relation classifier.",
"CNN is commonly used to construct relation classification system (Santos et al., 2015; Xu et al., 2015; Shen and Huang, 2016) , so we adopt a simple CNN with window size c w and kernel size c k , to model policy network π(s; θ).",
"The reason why we do not choice the variants of CNN (Zeng et al., 2015; Lin et al., 2016) that are well-designed for distant supervision is that these two models belong to bag-level models (dealing with a bag of sentences simultaneously) and deal with the multi-classification problem; We just need a model to do binary sentencelevel classification.",
"Naturally, the simpler network is adopted.",
"Training Policy-based Agent Unlike the goal of distant supervision relation extraction, our agent is to determine whether an annotated sentence expresses the target relation type rather than predict the relationship of entity pair, so sentences are treated independently despite belonging to the same entity pair.",
"In distant supervision training dataset, one relation type contains several thousands or ten thousands sentences; moreover, reward R can only be calculated after processing the whole positive set of this relation type.",
"If we randomly initialize the parameters of policy network and train this network by trial and errors, it will waste a lot of time and be inclined to poor convergence properties.",
"In order to overcome this problem, we adopt a supervised learning procedure to pre-train our policy network, in which way to provide a general learning direction for our policy-based agent.",
"Pre-training Strategy The pre-training strategy, inspired from AlphaGo (Silver et al., 2016) , is a common strategy in RL related works to accelerate the training of RL agents.",
"Normally, they utilize a small part of the annotated dataset to train policy networks before reinforcement learning.",
"For example, AlphaGo uses the collected experts moves to do a supervised learning for Go RL agent.",
"However, in distant supervision relation extraction task, there is not any supervised information that can be used unless let linguistic experts to do some manual annotations for part of the entity pairs.",
"However, this is expensive, and it is not the original intention of distant supervision.",
"Under this circumstance, we propose a compromised solution.",
"With well-aligned corpus, the true positive samples should have evident advantage in quantity compared with false positive samples in the distantly-supervised dataset.",
"So, for a specific relation type, we directly treat the distantly-supervised positive set as the positive set, and randomly extract part of distantly-supervised negative set as the negative set.",
"In order to better consider prior information during this pre-training procedure, the amount of negative samples is 10 times of the number of positive samples.",
"It is because, when learning with massive negative samples, the agent is more likely to develop toward a better direction.",
"Cross-entropy cost function is used to train this binary classifier, where the negative label corresponds to the removing action, and the positive label corresponds to the retaining action.",
"(2) J(θ) = i y i log[π(a = y i |s i ; θ)] + (1 − y i )log[1 − π(a = y i |s i ; θ)] Due to the noisy nature of the distantly-labeled instances, if we let this pre-training process overfit this noisy dataset, the predicted probabilities of most samples tend to be close to 0 or 1, which is difficult to be corrected and unnecessarily increases the training cost of reinforcement learning.",
"So, we stop this training process when the accuracy reaches 85% ∼ 90%.",
"Theoretically, our approach can be explained as increasing the entropy of the policy gradient agent, and preventing the entropy of the policy being too low, which means that the lack of exploration may be a concern.",
"3.1.2 Retraining Agent with Rewards As shown in Figure 2 , in order to discover incorrectly-labeled instances without any supervised information, we introduce a policy-based RL method.",
"What our agent tries to deal with is the noisy samples from the distantly-supervised positive dataset; Here we call it as the DS positive dataset.",
"We split it into the training positive set P ori t and the validation positive set P ori v ; naturally, both of these two set are noisy.",
"Correspondingly, the training negative set N ori t and the validation negative set N ori v are constructed by randomly selected from the DS negative dataset.",
"In every epoch, the agent removes a noisy sample set Ψ i from P ori t according to the stochastic policy π(a|s), and we obtain a new positive set P t = P ori t − Ψ i .",
"Because Ψ i is recognized as the wrong-labeled samples, we redistribute it into the negative set N t = N ori t + Ψ i .",
"Under this setting, the scale of training set is constant for each epoch.",
"Now we utilize the cleaned data {P t , N t } to train a relation classifier.",
"The desirable situation is that RL agent has the capacity to increase the performance of relation classifier through relocating incorrectly-labeled false positive instances.",
"Therefore, we use the validation set {P ori v , N ori v } to measure the performance of the current agent.",
"First, this validation set is filtered and redistributed by the current agent as {P v , N v }; the F 1 score of the current relation classifier is calculated from it.",
"Finally, the difference of F 1 scores between the current and previous epoch is used to calculate reward.",
"Next, we will introduce several strategies to train a more robust RL agent.",
"Removing the fixed number of sentences in each epoch In every epoch, we let the RL agent to remove a fixed number of sentences or less (when the number of the removed sentences in one epoch does not reach this fixed number during training), in which way to prevent the case that the agent tries to remove more false positive instances by removing more instances.",
"Under the restriction of fixed number, if the agent decides to remove the current state, it means the chance of removing other states decrease.",
"Therefore, in order to obtain a better reward, the agent should try to remove a instance set that includes more negative instances.",
"Loss function The quality of the RL agent is reflected by the quality of the removed part.",
"After the pre-training process, the agent just possesses Algorithm 1 Retraining agent with rewards for relation k. For a clearer expression, k is omitted in the following algorithm.",
"Require: Positive set {P ori t , P ori v }, Negative set {N ori t , N ori v }, the fixed number of removal γ t , γ v 1: Load parameters θ from pre-trained policy network 2: Initialize s * as the all-zero vector with the same dimension of s j 3: for epoch i = 1 → N do 4: for s j ∈ P ori t do 5: s j = concatenation(s j , s * ) 6: Randomly sample a j ∼ π(a| s j ; θ); compute p j = π(a = 0| s j ; θ) 7: if a j == 0 then Rank T based on p j from high to low, obtain T rank 12: for t i in T rank [: γ t ] do 13: Add t i [0] into Ψ i 14: end for 15: P i t = P ori t − Ψ i , N i t = N ori t + Ψ i , R = α(F i 1 − F i−1 1 ) 19 : Ω i−1 = Ψ i−1 − Ψ i ∩ Ψ i−1 ; Ω i = Ψ i − Ψ i ∩ Ψ i−1 20: 21: Updata θ: g ∝ θ Ω i log π(a|s; θ)R + θ Ω i−1 log π(a|s; θ)(−R) 22: end for the ability to distinguish the obvious false positive instances, which means the discrimination of the indistinguishable wrong-labeled instances are still ambiguous.",
"Particularly, this indistinguishable part is the criterion to reflect the quality of the agent.",
"Therefore, regardless of these easydistinguished instances, the different parts of the removed parts in different epochs are the determinant of the change of F 1 scores.",
"Therefore, we definite two sets: Ω i−1 = Ψ i−1 − (Ψ i ∩ Ψ i−1 ) (3) Ω i = Ψ i − (Ψ i ∩ Ψ i−1 ) (4) where Ψ i is the removed part of epoch i. Ω i−1 and Ω i are represented with the different colors in Figure 2.",
"If F 1 score increases in the epoch i, it means the actions of the epoch i is more reasonable than that in the epoch i − 1.",
"In other words, Ω i is more negative than Ω i−1 .",
"Thus, we assign the positive reward to Ω i and the negative reward to Ω i−1 , and vice versa.",
"In summary, the ultimate loss function is formulated as follow: (5) J(θ) = Ω i log π(a|s; θ)R + Ω i−1 log π(a|s; θ)(−R) Redistributing Training Dataset with Policy-based Agents Through the above reinforcement learning procedure, for each relation type, we obtain a agent as the false-positive indicator.",
"These agents possess the capability of recognizing incorrectly-labeled instances of the corresponding relation types.",
"We adopt these agents as classifiers to recognize false positive samples in the noisy distantly-supervised training dataset.",
"For one entity pair, if all the sentence aligned from corpus are classified as false positive, then this entity pair is redistributed into the negative set.",
"Experiments We adopt a policy-based RL method to generate a series of relation indicators and use them to re-distribute training dataset by moving false positive samples to negative sample set.",
"Therefore, our experiments are intended to demonstrate that our RL agents possess this capability.",
"Datast and Evaluation Metrics We evaluate the proposed method on a commonlyused dataset 2 , which is first presented in Riedel et al.",
"(2010) .",
"This dataset is generated by aligning entity pairs from Freebase with New York Times corpus(NYT).",
"Entity mentions of NYT corpus are recognized by the Stanford named entity recognizer (Finkel et al., 2005) .",
"Similar to the previous works, we adopt the held-out evaluation to evaluate our model, which can provide an approximate measure of the classification ability without costly human evaluation.",
"Similar to the generation of the training set, the entity pairs in test set are also selected from Freebase, which will be predicted under the sentences discovered from the NYT corpus.",
"Experimental Settings Policy-based Agent The action space of our RL agent just includes two actions.",
"Therefore, the agent can be modeled as a binary classifier.",
"We adopt a single-window CNN as this policy network.",
"The detailed hyperparameter settings are presented in Table 1 .",
"As for word embeddings, we directly use the word embedding file released by Lin et al.",
"(2016) 3 , which just keeps the words that appear more than 100 times in NYT.",
"Moreover, we have the same dimension setting of the position embedding, and the maximum length of relative distance is −30 and 30 (\"-\" and \"+\" represent the left and right side of the entities).",
"The learning rate of reinforcement learning is 2e −5 .",
"For each relation type, the fixed number γ t , γ v are according to the pre-trained agent.",
"When one relation type has too many distantsupervised positive sentences (for example, /lo-2 http://iesl.cs.umass.edu/riedel/ecml/ 3 https://github.com/thunlp/NRE Table 2 : Comparison of F 1 scores among three cases: the relation classifier is trained with the original dataset, the redistributed dataset generated by the pre-trained agent, and the redistributed dataset generated by our RL agent respectively.",
"The name of relation types are abbreviated: /peo/per/pob represents /people/person/place of birth cation/location/contains has 75768 sentences), we sample a subset of size 7,500 sentences to train the agent.",
"For the average vector of the removed sentences, in the pre-training process and the first state of the retraining process, it is set as all-zero vector.",
"Relation Classifier for Calculating Reward In order to evaluate a series of actions by agent, we use a simple CNN model, because the simple network is more sensitive to the quality of the training set.",
"The proportion between P ori t and P ori v is 2:1, and they are all derived from the training set of Riedel dataset; the corresponding negative sample sets N ori t and N ori v are randomly selected from the Riedel negative dataset, whose size is twice that of their corresponding positive sets.",
"The Effectiveness of Reinforcement Learning In Table 2 , we list the F 1 scores before and after adopting the proposed RL method.",
"Even though there are 52 actual relation types in Riedel dataset, only 10 relation types have more than 1000 pos- Zeng et al.",
"(2015) and Lin et al.",
"(2016) are both the robust models to solve wrong labeling problem of distant supervision relation extraction.",
"Zeng et al.",
"(2015) combine at-least-one multi-instance learning with deep neural network to extract only one active sentence to predict the relation between entity pair; Lin et al.",
"(2016) combine all sentences of one entity pair and assign soft attention weights to them, in which way to generate a compositive relation representation for this entity pair.",
"However, the false positive phenomenon also includes the case that all the sentences of one entity pair are wrong, which is because the corpus is not completely aligned with the knowledge base.",
"This phenomenon is also common between Riedel dataset and Freebase through our manual inspection.",
"Obviously, there is nothing the above two methods can do in this case.",
"The proposed RL method is to tackle this problem.",
"We adopt our RL agents to redistribute Riedel dataset by moving false positive samples into the negative sample set.",
"Then we use Zeng et al.",
"(2015) and Lin et al.",
"(2016) to predict relations on this cleaned dataset, and compare the performance with that on the original Riedel dataset.",
"As shown in Figure 3 and Figure 4 , under the assistant of our RL agent, the same model can achieve obvious improvement with more reasonable training dataset.",
"In order to give the more intuitive comparison, we calculate the AUC value of each PR curve, which reflects the area size under these curves.",
"These comparable results also indicate the effectiveness of our policy-based RL method.",
"Moreover, as can be seen from the result of t-test evaluation, all the p-values are less than 5e-02, so the improvements are significant.",
"proportional to the original scale, which is in accordance with the actual accident situation.",
"At the same time, we analyze the correlation between the false positive phenomenon and the number of sentences of entity pairs : With this the number ranging from 1 to 5, the corresponding percentages are [55.9%, 32.0%, 3.7%, 4.4%, 0.7%].",
"This distribution is consistent with our assumption.",
"Because Freebase is, to some extent, not completely aligned with the NYT corpus, entity pairs with fewer sentences are more likely to be false positive, which is the major factor hindering the performance of the previous systems.",
"In Table 4 , we present some false positive examples selected by our agents.",
"Taking entity pair (Sami Moubayed, Syria) as an example, it is obvious that there is not any valuable information reflecting relation /people/person/place of birth.",
"Both of these sentences talks about the situation analysis of Syria from the political analyst Sami Moubayed.",
"We also found that, for some entity pairs, even though there are multiple sentences, all of them are identical.",
"This phenomenon also increases the probability of the appearance of false positive samples.",
"Case Study Conclusion In this work, we propose a deep reinforcement learning framework for robust distant supervision.",
"The intuition is that, in contrast to prior works that utilize only one instance per entity pair and use soft attention weights to select plausible distantly supervised examples, we describe a policy-based framework to systematically learn to relocate the false positive samples, and better utilize the unlabeled data.",
"More specifically, our goal is to Table 2. teach the reinforcement agent to optimize the selection/redistribution strategy that maximizes the reward of boosting the performance of relation classification.",
"An important aspect of our work is that our framework does not depend on a specific form of the relation classifier, meaning that it is a plug-and-play technique that could be potentially applied to any relation extraction pipeline.",
"In experiments, we show that our framework boosts the performance of distant supervision relation extraction of various strong deep learning baselines on the widely used New York Times -Freebase dataset."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.1.1",
"3.2",
"4",
"4.1",
"4.2.1",
"4.2.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Reinforcement Learning for Distant Supervision",
"Training Policy-based Agent",
"Pre-training Strategy",
"Redistributing Training Dataset with",
"Experiments",
"Datast and Evaluation Metrics",
"Policy-based Agent",
"Relation Classifier for Calculating Reward",
"The Effectiveness of Reinforcement Learning",
"Conclusion"
]
} | GEM-SciDuet-train-136#paper-1365#slide-3 | Requirements | General Purpose and Offline Process
Learn a Policy to Denoise the Training Data | General Purpose and Offline Process
Learn a Policy to Denoise the Training Data | [] |
GEM-SciDuet-train-136#paper-1365#slide-4 | 1365 | Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning | Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not come at no cost: the resulting distantly-supervised training samples are often very noisy. To combat the noise, most of the recent state-of-the-art approaches focus on selecting one best sentence or calculating soft attention weights over the set of the sentences of one specific entity pair. However, these methods are suboptimal, and the false positive problem is still a key stumbling bottleneck for the performance. We argue that those incorrectly-labeled candidate sentences must be treated with a hard decision, rather than being dealt with via soft attention weights. To do this, our paper describes a radical solution: we explore a deep reinforcement learning strategy to generate the false-positive indicator, where we automatically recognize false positives for each relation type without any supervised information. Unlike the removal operation in the previous studies, we redistribute them into the negative examples. The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared to state-of-the-art systems. |
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"Introduction Relation extraction is a core task in information extraction and natural language understanding.",
"The goal of relation extraction is to predict relations for entities in a sentence (Zelenko et al., 2003; Bunescu and Mooney, 2005; GuoDong et al., 2005) .",
"For example, given a sentence \"Barack Obama is married to Michelle Obama.",
"\", a relation classifier aims at predicting the relation of \"spouse\".",
"In downstream applications, relation extraction is the key module for constructing knowledge graphs, and it is a vital component of many natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.",
"A major issue encountered in the early development of relation extraction algorithms is the data sparsity issue-It is extremely expensive, and almost impossible for human annotators to go through a large corpus of millions of sentences to provide a large amount of labeled training instances.",
"Therefore, distant supervision relation extraction (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012) becomes popular, because it uses entity pairs from knowledge bases to select a set of noisy instances from unlabeled data.",
"In recent years, neural network approaches (Zeng et al., 2014 (Zeng et al., , 2015 have been proposed to train the relation extractor under these noisy conditions.",
"To suppress the noisy (Roth et al., 2013) , recent stud-ies (Lin et al., 2016) have proposed the use of attention mechanisms to place soft weights on a set of noisy sentences, and select samples.",
"However, we argue that only selecting one example or based on soft attention weights are not the optimal strategy: To improve the robustness, we need a systematic solution to make use of more instances, while removing false positives and placing them in the right place.",
"In this paper, we investigate the possibility of using dynamic selection strategies for robust distant supervision.",
"More specifically, we design a deep reinforcement learning agent, whose goal is to learn to choose whether to remove or remain the distantly supervised candidate instance based on the performance change of the relation classifier.",
"Intuitively, our agent would like to remove false positives, and reconstruct a cleaned set of distantly supervised instances to maximize the reward based on the classification accuracy.",
"Our proposed method is classifier-independent, and it can be applied to any existing distant supervision model.",
"Empirically, we show that our method has brought consistent performance gains in various deep neural network based models, achieving strong performances on the widely used New York Times dataset (Riedel et al., 2010) .",
"Our contributions are three-fold: • We propose a novel deep reinforcement learning framework for robust distant supervision relation extraction.",
"• Our method is model-independent, meaning that it could be applied to any state-of-the-art relation extractors.",
"• We show that our method can boost the performances of recently proposed neural relation extractors.",
"In Section 2, we will discuss related works on distant supervision relation extraction.",
"Next, we will describe our robust distant supervision framework in Section 3.",
"In Section 4, empirical evaluation results are shown.",
"And finally, we conclude in Section 5.",
"Mintz et al.",
"(2009) is the first study that combines dependency path and feature aggregation for distant supervision.",
"However, this approach would introduce a lot of false positives, as the same entity pair might have multiple relations.",
"To alleviate this issue, Hoffmann et al.",
"(2011) address this issue, and propose a model to jointly learn with multiple relations.",
"Surdeanu et al.",
"(2012) further propose a multi-instance multi-label learning framework to improve the performance.",
"Note that these early approaches do not explicitly remove noisy instances, but rather hope that the model would be able to suppress the noise.",
"Related Work Recently, with the advance of neural network techniques, deep learning methods (Zeng et al., 2014 (Zeng et al., , 2015 are introduced, and the hope is to model noisy distant supervision process in the hidden layers.",
"However, their approach only selects one most plausible instance per entity pair, inevitably missing out a lot of valuable training instances.",
"Recently, Lin et al.",
"(2016) propose an attention mechanism to select plausible instances from a set of noisy instances.",
"However, we believe that soft attention weight assignment might not be the optimal solution, since the false positives should be completely removed and placed in the negative set.",
"Ji et al.",
"(2017) combine the external knowledge to rich the representation of entity pair, in which way to improve the accuracy of attention weights.",
"Even though these above-mentioned methods can select high-quality instances, they ignore the false positive case: all the sentences of one entity pair belongs to the false positives.",
"In this work, we take a radical approach to solve this problem-We will make use of the distantly labeled resources as much as possible, while learning a independent false-positive indicator to remove false positives, and place them in the right place.",
"After our ACL submission, we notice that a contemporaneous study Feng et al.",
"(2018) also adopts reinforcement learning to learn an instance selector, but their reward is calculated from the prediction probabilities.",
"In contrast, while in our method, the reward is intuitively reflected by the performance change of the relation classifier.",
"Our approach is also complement to most of the approaches above, and can be directly applied on top of any existing relation extraction classifiers.",
"Reinforcement Learning for Distant Supervision We introduce a performance-driven, policy-based reinforcement learning method to heuristically recognize false positive samples.",
"Comparing to a prior study that has underutilized the distantlysupervised samples (Lin et al., 2016) , we consider an RL agent for robust distant supervision relation extraction.",
"We first describe the definitions of our RL method, including the policy-based agent, external environment, and pre-training strategy.",
"Next, we describe the retraining strategy for our RL agent.",
"The goal of our agent is to determine whether to retain or remove a distantlysupervised sentence, based on the performance change of relation classifier.",
"Finally, we describe the noisy-suppression method, where we teach our policy-based agent to make a redistribution for a cleaner distant supervision training dataset.",
"Distant supervision relation extraction is to predict the relation type of entity pair under the automatically-generated training set.",
"However, the issue is that these distantly-supervised sentences that mention this entity pair may not express the desired relation type.",
"Therefore, what our RL agent should do is to determine whether the distantly-supervised sentence is a true positive instance for this relation type.",
"For reinforcement learning, external environment and RL agent are two necessary components, and a robust agent is trained from the dynamic interaction between these two parts (Arulkumaran et al., 2017) .",
"First, the prerequisite of reinforcement learning is that the external environment should be modeled as a Markov decision process (MDP).",
"However, the traditional setting of relation extraction cannot satisfy this condition: the input sentences are independent of each other.",
"In other words, we cannot merely use the information of the sentence being processed as the state.",
"Thus, we add the information from the early states into the representation of the current state, in which way to model our task as a MDP problem (Fang et al., 2017) .",
"The other component, RL agent, is parameterized with a policy network π θ (s, a) = p(a|s; θ).",
"The probability distribution of actions A = {a remove , a remain } is calculated by policy network based on state vectors.",
"What needs to be noted is that, Deep Q Network (DQN) (Mnih et al., 2013) is also a widelyused RL method; however, it is not suitable for our case, even if our action space is small.",
"First, we cannot compute the immediate reward for every operation; In contrast, the accurate reward can only be obtained after finishing processing the whole training dataset.",
"Second, the stochastic policy of the policy network is capable of prevent-ing the agent from getting stuck in an intermediate state.",
"The following subsections detailedly introduce the definitions of the fundamental components in the proposed RL method.",
"States In order to satisfy the condition of MDP, the state s includes the information from the current sentence and the sentences that have been removed in early states.",
"The semantic and syntactic information of sentence is represented by a continuous real-valued vector.",
"According to some state-of-the-art supervised relation extraction approaches (Zeng et al., 2014; Nguyen and Grishman, 2015) , we utilize both word embedding and position embedding to convert sentence into vector.",
"With this sentence vector, the current state is the concatenation of the current sentence vector and the average vector of the removed sentences in early states.",
"We give relatively larger weight for the vector of the current sentence, in which way to magnify the dominating influence of the current sentence information for the decision of action.",
"Actions At each step, our agent is required to determine whether the instance is false positive for target relation type.",
"Each relation type has a agent 1 .",
"There are two actions for each agent: whether to remove or retain the current instance from the training set.",
"With the initial distantlysupervised dataset that is blended with incorrectlylabeled instances, we hope that our agent is capable of using the policy network to filter noisy instances; Under this cleaned dataset, distant supervision is then expected to achieve better performance.",
"Rewards As previously mentioned, the intuition of our model is that, when the incorrectly-labeled instances are filtered, the better performance of relation classifier will achieve.",
"Therefore, we use the change of performance as the result-driven reward for a series of actions decided by the agent.",
"Compared to accuracy, we adopt the F 1 score as the evaluation criterion, since accuracy might not be an indicative metric in a multi-class classification setting where the data distribution could be imbalanced.",
"Thus, the reward can be formulated as the RL Agent Train Relation Classifier \" #$\" \" # × + # + ×(− # ) Noisy dataset - ./# Cleaned dataset - #$\" Cleaned dataset - # Removed part Removed part Train # = ( \" # -\" #$\" ) Relation Classifier RL Agent Epoch − 1 : Epoch : Figure 2 : The proposed policy-based reinforcement learning framework.",
"The agent tries to remove the wrong-labeled sentences from the distantly-supervised positive dataset P ori .",
"In order to calculate the reward, P ori is split into the training part P ori t and the validation part P ori v ; their corresponding negative part are represented as N ori t and N ori v .",
"In each epoch i, the agent performs a series of actions to recognize the false positive samples from P ori t and treat them as negative samples.",
"Then, a new relation classifier is trained under the new dataset Noisy dataset - ./# + - ./# { 6 #$\" , 6 #$\" } - #$\" - ./# + - # { 6 # , 6 # } {P i t , N i t }.",
"With this relation classifier, F 1 score is calculated from the new validation set {P i v , N i v }, where P i v is also filtered by the current agent.",
"After that, the current reward is measured as the difference of F 1 between the adjacent epochs.",
"difference between the adjacent epochs: R i = α(F i 1 − F i−1 1 ) (1) As this equation shows, in step i, our agent is given a positive reward only if F 1 gets improved; otherwise, the agent will receive a negative reward.",
"Under this setting, the value of reward is proportional to the difference of F 1 , and α is used to convert this difference into a rational numeric range.",
"Naturally, the value of the reward is in a continuous space, which is more reasonable than a binary reward (−1 and 1), because this setting can reflect the number of wrong-labeled instance that the agent has removed.",
"In order to avoid the randomness of F 1 , we use the average F 1 of last five epochs to calculate the reward.",
"Policy Network For each input sentence, our policy network is to determine whether it expresses the target relation type and then make removal action if it is irrelevant to the target relation type.",
"Thus, it is analogous to a binary relation classifier.",
"CNN is commonly used to construct relation classification system (Santos et al., 2015; Xu et al., 2015; Shen and Huang, 2016) , so we adopt a simple CNN with window size c w and kernel size c k , to model policy network π(s; θ).",
"The reason why we do not choice the variants of CNN (Zeng et al., 2015; Lin et al., 2016) that are well-designed for distant supervision is that these two models belong to bag-level models (dealing with a bag of sentences simultaneously) and deal with the multi-classification problem; We just need a model to do binary sentencelevel classification.",
"Naturally, the simpler network is adopted.",
"Training Policy-based Agent Unlike the goal of distant supervision relation extraction, our agent is to determine whether an annotated sentence expresses the target relation type rather than predict the relationship of entity pair, so sentences are treated independently despite belonging to the same entity pair.",
"In distant supervision training dataset, one relation type contains several thousands or ten thousands sentences; moreover, reward R can only be calculated after processing the whole positive set of this relation type.",
"If we randomly initialize the parameters of policy network and train this network by trial and errors, it will waste a lot of time and be inclined to poor convergence properties.",
"In order to overcome this problem, we adopt a supervised learning procedure to pre-train our policy network, in which way to provide a general learning direction for our policy-based agent.",
"Pre-training Strategy The pre-training strategy, inspired from AlphaGo (Silver et al., 2016) , is a common strategy in RL related works to accelerate the training of RL agents.",
"Normally, they utilize a small part of the annotated dataset to train policy networks before reinforcement learning.",
"For example, AlphaGo uses the collected experts moves to do a supervised learning for Go RL agent.",
"However, in distant supervision relation extraction task, there is not any supervised information that can be used unless let linguistic experts to do some manual annotations for part of the entity pairs.",
"However, this is expensive, and it is not the original intention of distant supervision.",
"Under this circumstance, we propose a compromised solution.",
"With well-aligned corpus, the true positive samples should have evident advantage in quantity compared with false positive samples in the distantly-supervised dataset.",
"So, for a specific relation type, we directly treat the distantly-supervised positive set as the positive set, and randomly extract part of distantly-supervised negative set as the negative set.",
"In order to better consider prior information during this pre-training procedure, the amount of negative samples is 10 times of the number of positive samples.",
"It is because, when learning with massive negative samples, the agent is more likely to develop toward a better direction.",
"Cross-entropy cost function is used to train this binary classifier, where the negative label corresponds to the removing action, and the positive label corresponds to the retaining action.",
"(2) J(θ) = i y i log[π(a = y i |s i ; θ)] + (1 − y i )log[1 − π(a = y i |s i ; θ)] Due to the noisy nature of the distantly-labeled instances, if we let this pre-training process overfit this noisy dataset, the predicted probabilities of most samples tend to be close to 0 or 1, which is difficult to be corrected and unnecessarily increases the training cost of reinforcement learning.",
"So, we stop this training process when the accuracy reaches 85% ∼ 90%.",
"Theoretically, our approach can be explained as increasing the entropy of the policy gradient agent, and preventing the entropy of the policy being too low, which means that the lack of exploration may be a concern.",
"3.1.2 Retraining Agent with Rewards As shown in Figure 2 , in order to discover incorrectly-labeled instances without any supervised information, we introduce a policy-based RL method.",
"What our agent tries to deal with is the noisy samples from the distantly-supervised positive dataset; Here we call it as the DS positive dataset.",
"We split it into the training positive set P ori t and the validation positive set P ori v ; naturally, both of these two set are noisy.",
"Correspondingly, the training negative set N ori t and the validation negative set N ori v are constructed by randomly selected from the DS negative dataset.",
"In every epoch, the agent removes a noisy sample set Ψ i from P ori t according to the stochastic policy π(a|s), and we obtain a new positive set P t = P ori t − Ψ i .",
"Because Ψ i is recognized as the wrong-labeled samples, we redistribute it into the negative set N t = N ori t + Ψ i .",
"Under this setting, the scale of training set is constant for each epoch.",
"Now we utilize the cleaned data {P t , N t } to train a relation classifier.",
"The desirable situation is that RL agent has the capacity to increase the performance of relation classifier through relocating incorrectly-labeled false positive instances.",
"Therefore, we use the validation set {P ori v , N ori v } to measure the performance of the current agent.",
"First, this validation set is filtered and redistributed by the current agent as {P v , N v }; the F 1 score of the current relation classifier is calculated from it.",
"Finally, the difference of F 1 scores between the current and previous epoch is used to calculate reward.",
"Next, we will introduce several strategies to train a more robust RL agent.",
"Removing the fixed number of sentences in each epoch In every epoch, we let the RL agent to remove a fixed number of sentences or less (when the number of the removed sentences in one epoch does not reach this fixed number during training), in which way to prevent the case that the agent tries to remove more false positive instances by removing more instances.",
"Under the restriction of fixed number, if the agent decides to remove the current state, it means the chance of removing other states decrease.",
"Therefore, in order to obtain a better reward, the agent should try to remove a instance set that includes more negative instances.",
"Loss function The quality of the RL agent is reflected by the quality of the removed part.",
"After the pre-training process, the agent just possesses Algorithm 1 Retraining agent with rewards for relation k. For a clearer expression, k is omitted in the following algorithm.",
"Require: Positive set {P ori t , P ori v }, Negative set {N ori t , N ori v }, the fixed number of removal γ t , γ v 1: Load parameters θ from pre-trained policy network 2: Initialize s * as the all-zero vector with the same dimension of s j 3: for epoch i = 1 → N do 4: for s j ∈ P ori t do 5: s j = concatenation(s j , s * ) 6: Randomly sample a j ∼ π(a| s j ; θ); compute p j = π(a = 0| s j ; θ) 7: if a j == 0 then Rank T based on p j from high to low, obtain T rank 12: for t i in T rank [: γ t ] do 13: Add t i [0] into Ψ i 14: end for 15: P i t = P ori t − Ψ i , N i t = N ori t + Ψ i , R = α(F i 1 − F i−1 1 ) 19 : Ω i−1 = Ψ i−1 − Ψ i ∩ Ψ i−1 ; Ω i = Ψ i − Ψ i ∩ Ψ i−1 20: 21: Updata θ: g ∝ θ Ω i log π(a|s; θ)R + θ Ω i−1 log π(a|s; θ)(−R) 22: end for the ability to distinguish the obvious false positive instances, which means the discrimination of the indistinguishable wrong-labeled instances are still ambiguous.",
"Particularly, this indistinguishable part is the criterion to reflect the quality of the agent.",
"Therefore, regardless of these easydistinguished instances, the different parts of the removed parts in different epochs are the determinant of the change of F 1 scores.",
"Therefore, we definite two sets: Ω i−1 = Ψ i−1 − (Ψ i ∩ Ψ i−1 ) (3) Ω i = Ψ i − (Ψ i ∩ Ψ i−1 ) (4) where Ψ i is the removed part of epoch i. Ω i−1 and Ω i are represented with the different colors in Figure 2.",
"If F 1 score increases in the epoch i, it means the actions of the epoch i is more reasonable than that in the epoch i − 1.",
"In other words, Ω i is more negative than Ω i−1 .",
"Thus, we assign the positive reward to Ω i and the negative reward to Ω i−1 , and vice versa.",
"In summary, the ultimate loss function is formulated as follow: (5) J(θ) = Ω i log π(a|s; θ)R + Ω i−1 log π(a|s; θ)(−R) Redistributing Training Dataset with Policy-based Agents Through the above reinforcement learning procedure, for each relation type, we obtain a agent as the false-positive indicator.",
"These agents possess the capability of recognizing incorrectly-labeled instances of the corresponding relation types.",
"We adopt these agents as classifiers to recognize false positive samples in the noisy distantly-supervised training dataset.",
"For one entity pair, if all the sentence aligned from corpus are classified as false positive, then this entity pair is redistributed into the negative set.",
"Experiments We adopt a policy-based RL method to generate a series of relation indicators and use them to re-distribute training dataset by moving false positive samples to negative sample set.",
"Therefore, our experiments are intended to demonstrate that our RL agents possess this capability.",
"Datast and Evaluation Metrics We evaluate the proposed method on a commonlyused dataset 2 , which is first presented in Riedel et al.",
"(2010) .",
"This dataset is generated by aligning entity pairs from Freebase with New York Times corpus(NYT).",
"Entity mentions of NYT corpus are recognized by the Stanford named entity recognizer (Finkel et al., 2005) .",
"Similar to the previous works, we adopt the held-out evaluation to evaluate our model, which can provide an approximate measure of the classification ability without costly human evaluation.",
"Similar to the generation of the training set, the entity pairs in test set are also selected from Freebase, which will be predicted under the sentences discovered from the NYT corpus.",
"Experimental Settings Policy-based Agent The action space of our RL agent just includes two actions.",
"Therefore, the agent can be modeled as a binary classifier.",
"We adopt a single-window CNN as this policy network.",
"The detailed hyperparameter settings are presented in Table 1 .",
"As for word embeddings, we directly use the word embedding file released by Lin et al.",
"(2016) 3 , which just keeps the words that appear more than 100 times in NYT.",
"Moreover, we have the same dimension setting of the position embedding, and the maximum length of relative distance is −30 and 30 (\"-\" and \"+\" represent the left and right side of the entities).",
"The learning rate of reinforcement learning is 2e −5 .",
"For each relation type, the fixed number γ t , γ v are according to the pre-trained agent.",
"When one relation type has too many distantsupervised positive sentences (for example, /lo-2 http://iesl.cs.umass.edu/riedel/ecml/ 3 https://github.com/thunlp/NRE Table 2 : Comparison of F 1 scores among three cases: the relation classifier is trained with the original dataset, the redistributed dataset generated by the pre-trained agent, and the redistributed dataset generated by our RL agent respectively.",
"The name of relation types are abbreviated: /peo/per/pob represents /people/person/place of birth cation/location/contains has 75768 sentences), we sample a subset of size 7,500 sentences to train the agent.",
"For the average vector of the removed sentences, in the pre-training process and the first state of the retraining process, it is set as all-zero vector.",
"Relation Classifier for Calculating Reward In order to evaluate a series of actions by agent, we use a simple CNN model, because the simple network is more sensitive to the quality of the training set.",
"The proportion between P ori t and P ori v is 2:1, and they are all derived from the training set of Riedel dataset; the corresponding negative sample sets N ori t and N ori v are randomly selected from the Riedel negative dataset, whose size is twice that of their corresponding positive sets.",
"The Effectiveness of Reinforcement Learning In Table 2 , we list the F 1 scores before and after adopting the proposed RL method.",
"Even though there are 52 actual relation types in Riedel dataset, only 10 relation types have more than 1000 pos- Zeng et al.",
"(2015) and Lin et al.",
"(2016) are both the robust models to solve wrong labeling problem of distant supervision relation extraction.",
"Zeng et al.",
"(2015) combine at-least-one multi-instance learning with deep neural network to extract only one active sentence to predict the relation between entity pair; Lin et al.",
"(2016) combine all sentences of one entity pair and assign soft attention weights to them, in which way to generate a compositive relation representation for this entity pair.",
"However, the false positive phenomenon also includes the case that all the sentences of one entity pair are wrong, which is because the corpus is not completely aligned with the knowledge base.",
"This phenomenon is also common between Riedel dataset and Freebase through our manual inspection.",
"Obviously, there is nothing the above two methods can do in this case.",
"The proposed RL method is to tackle this problem.",
"We adopt our RL agents to redistribute Riedel dataset by moving false positive samples into the negative sample set.",
"Then we use Zeng et al.",
"(2015) and Lin et al.",
"(2016) to predict relations on this cleaned dataset, and compare the performance with that on the original Riedel dataset.",
"As shown in Figure 3 and Figure 4 , under the assistant of our RL agent, the same model can achieve obvious improvement with more reasonable training dataset.",
"In order to give the more intuitive comparison, we calculate the AUC value of each PR curve, which reflects the area size under these curves.",
"These comparable results also indicate the effectiveness of our policy-based RL method.",
"Moreover, as can be seen from the result of t-test evaluation, all the p-values are less than 5e-02, so the improvements are significant.",
"proportional to the original scale, which is in accordance with the actual accident situation.",
"At the same time, we analyze the correlation between the false positive phenomenon and the number of sentences of entity pairs : With this the number ranging from 1 to 5, the corresponding percentages are [55.9%, 32.0%, 3.7%, 4.4%, 0.7%].",
"This distribution is consistent with our assumption.",
"Because Freebase is, to some extent, not completely aligned with the NYT corpus, entity pairs with fewer sentences are more likely to be false positive, which is the major factor hindering the performance of the previous systems.",
"In Table 4 , we present some false positive examples selected by our agents.",
"Taking entity pair (Sami Moubayed, Syria) as an example, it is obvious that there is not any valuable information reflecting relation /people/person/place of birth.",
"Both of these sentences talks about the situation analysis of Syria from the political analyst Sami Moubayed.",
"We also found that, for some entity pairs, even though there are multiple sentences, all of them are identical.",
"This phenomenon also increases the probability of the appearance of false positive samples.",
"Case Study Conclusion In this work, we propose a deep reinforcement learning framework for robust distant supervision.",
"The intuition is that, in contrast to prior works that utilize only one instance per entity pair and use soft attention weights to select plausible distantly supervised examples, we describe a policy-based framework to systematically learn to relocate the false positive samples, and better utilize the unlabeled data.",
"More specifically, our goal is to Table 2. teach the reinforcement agent to optimize the selection/redistribution strategy that maximizes the reward of boosting the performance of relation classification.",
"An important aspect of our work is that our framework does not depend on a specific form of the relation classifier, meaning that it is a plug-and-play technique that could be potentially applied to any relation extraction pipeline.",
"In experiments, we show that our framework boosts the performance of distant supervision relation extraction of various strong deep learning baselines on the widely used New York Times -Freebase dataset."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.1.1",
"3.2",
"4",
"4.1",
"4.2.1",
"4.2.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Reinforcement Learning for Distant Supervision",
"Training Policy-based Agent",
"Pre-training Strategy",
"Redistributing Training Dataset with",
"Experiments",
"Datast and Evaluation Metrics",
"Policy-based Agent",
"Relation Classifier for Calculating Reward",
"The Effectiveness of Reinforcement Learning",
"Conclusion"
]
GEM-SciDuet-train-136#paper-1365#slide-4 | Deep Reinforcement Learning | The average vector of previous removed sentences
One relation type has an agent
Positive: Distantly-supervised positive sentences
Negative: Sampled from other relations
Split into training set and validation set
RL Agent dataset Train
RL Agent Cleaned dataset Train Relation Classifier | The average vector of previous removed sentences
One relation type has an agent
Positive: Distantly-supervised positive sentences
Negative: Sampled from other relations
Split into training set and validation set
RL Agent dataset Train
RL Agent Cleaned dataset Train Relation Classifier | [] |
GEM-SciDuet-train-136#paper-1365#slide-5 | 1365 | Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning | Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not come at no cost: the resulting distantly-supervised training samples are often very noisy. To combat the noise, most of the recent state-of-the-art approaches focus on selecting one best sentence or calculating soft attention weights over the set of the sentences of one specific entity pair. However, these methods are suboptimal, and the false positive problem is still a key stumbling bottleneck for the performance. We argue that those incorrectly-labeled candidate sentences must be treated with a hard decision, rather than being dealt with via soft attention weights. To do this, our paper describes a radical solution: we explore a deep reinforcement learning strategy to generate the false-positive indicator, where we automatically recognize false positives for each relation type without any supervised information. Unlike the removal operation in the previous studies, we redistribute them into the negative examples. The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared to state-of-the-art systems. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"Introduction Relation extraction is a core task in information extraction and natural language understanding.",
"The goal of relation extraction is to predict relations for entities in a sentence (Zelenko et al., 2003; Bunescu and Mooney, 2005; GuoDong et al., 2005) .",
"For example, given a sentence \"Barack Obama is married to Michelle Obama.",
"\", a relation classifier aims at predicting the relation of \"spouse\".",
"In downstream applications, relation extraction is the key module for constructing knowledge graphs, and it is a vital component of many natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.",
"A major issue encountered in the early development of relation extraction algorithms is the data sparsity issue-It is extremely expensive, and almost impossible for human annotators to go through a large corpus of millions of sentences to provide a large amount of labeled training instances.",
"Therefore, distant supervision relation extraction (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012) becomes popular, because it uses entity pairs from knowledge bases to select a set of noisy instances from unlabeled data.",
"In recent years, neural network approaches (Zeng et al., 2014 (Zeng et al., , 2015 have been proposed to train the relation extractor under these noisy conditions.",
"To suppress the noisy (Roth et al., 2013) , recent stud-ies (Lin et al., 2016) have proposed the use of attention mechanisms to place soft weights on a set of noisy sentences, and select samples.",
"However, we argue that only selecting one example or based on soft attention weights are not the optimal strategy: To improve the robustness, we need a systematic solution to make use of more instances, while removing false positives and placing them in the right place.",
"In this paper, we investigate the possibility of using dynamic selection strategies for robust distant supervision.",
"More specifically, we design a deep reinforcement learning agent, whose goal is to learn to choose whether to remove or remain the distantly supervised candidate instance based on the performance change of the relation classifier.",
"Intuitively, our agent would like to remove false positives, and reconstruct a cleaned set of distantly supervised instances to maximize the reward based on the classification accuracy.",
"Our proposed method is classifier-independent, and it can be applied to any existing distant supervision model.",
"Empirically, we show that our method has brought consistent performance gains in various deep neural network based models, achieving strong performances on the widely used New York Times dataset (Riedel et al., 2010) .",
"Our contributions are three-fold: • We propose a novel deep reinforcement learning framework for robust distant supervision relation extraction.",
"• Our method is model-independent, meaning that it could be applied to any state-of-the-art relation extractors.",
"• We show that our method can boost the performances of recently proposed neural relation extractors.",
"In Section 2, we will discuss related works on distant supervision relation extraction.",
"Next, we will describe our robust distant supervision framework in Section 3.",
"In Section 4, empirical evaluation results are shown.",
"And finally, we conclude in Section 5.",
"Mintz et al.",
"(2009) is the first study that combines dependency path and feature aggregation for distant supervision.",
"However, this approach would introduce a lot of false positives, as the same entity pair might have multiple relations.",
"To alleviate this issue, Hoffmann et al.",
"(2011) address this issue, and propose a model to jointly learn with multiple relations.",
"Surdeanu et al.",
"(2012) further propose a multi-instance multi-label learning framework to improve the performance.",
"Note that these early approaches do not explicitly remove noisy instances, but rather hope that the model would be able to suppress the noise.",
"Related Work Recently, with the advance of neural network techniques, deep learning methods (Zeng et al., 2014 (Zeng et al., , 2015 are introduced, and the hope is to model noisy distant supervision process in the hidden layers.",
"However, their approach only selects one most plausible instance per entity pair, inevitably missing out a lot of valuable training instances.",
"Recently, Lin et al.",
"(2016) propose an attention mechanism to select plausible instances from a set of noisy instances.",
"However, we believe that soft attention weight assignment might not be the optimal solution, since the false positives should be completely removed and placed in the negative set.",
"Ji et al.",
"(2017) combine the external knowledge to rich the representation of entity pair, in which way to improve the accuracy of attention weights.",
"Even though these above-mentioned methods can select high-quality instances, they ignore the false positive case: all the sentences of one entity pair belongs to the false positives.",
"In this work, we take a radical approach to solve this problem-We will make use of the distantly labeled resources as much as possible, while learning a independent false-positive indicator to remove false positives, and place them in the right place.",
"After our ACL submission, we notice that a contemporaneous study Feng et al.",
"(2018) also adopts reinforcement learning to learn an instance selector, but their reward is calculated from the prediction probabilities.",
"In contrast, while in our method, the reward is intuitively reflected by the performance change of the relation classifier.",
"Our approach is also complement to most of the approaches above, and can be directly applied on top of any existing relation extraction classifiers.",
"Reinforcement Learning for Distant Supervision We introduce a performance-driven, policy-based reinforcement learning method to heuristically recognize false positive samples.",
"Comparing to a prior study that has underutilized the distantlysupervised samples (Lin et al., 2016) , we consider an RL agent for robust distant supervision relation extraction.",
"We first describe the definitions of our RL method, including the policy-based agent, external environment, and pre-training strategy.",
"Next, we describe the retraining strategy for our RL agent.",
"The goal of our agent is to determine whether to retain or remove a distantlysupervised sentence, based on the performance change of relation classifier.",
"Finally, we describe the noisy-suppression method, where we teach our policy-based agent to make a redistribution for a cleaner distant supervision training dataset.",
"Distant supervision relation extraction is to predict the relation type of entity pair under the automatically-generated training set.",
"However, the issue is that these distantly-supervised sentences that mention this entity pair may not express the desired relation type.",
"Therefore, what our RL agent should do is to determine whether the distantly-supervised sentence is a true positive instance for this relation type.",
"For reinforcement learning, external environment and RL agent are two necessary components, and a robust agent is trained from the dynamic interaction between these two parts (Arulkumaran et al., 2017) .",
"First, the prerequisite of reinforcement learning is that the external environment should be modeled as a Markov decision process (MDP).",
"However, the traditional setting of relation extraction cannot satisfy this condition: the input sentences are independent of each other.",
"In other words, we cannot merely use the information of the sentence being processed as the state.",
"Thus, we add the information from the early states into the representation of the current state, in which way to model our task as a MDP problem (Fang et al., 2017) .",
"The other component, RL agent, is parameterized with a policy network π θ (s, a) = p(a|s; θ).",
"The probability distribution of actions A = {a remove , a remain } is calculated by policy network based on state vectors.",
"What needs to be noted is that, Deep Q Network (DQN) (Mnih et al., 2013) is also a widelyused RL method; however, it is not suitable for our case, even if our action space is small.",
"First, we cannot compute the immediate reward for every operation; In contrast, the accurate reward can only be obtained after finishing processing the whole training dataset.",
"Second, the stochastic policy of the policy network is capable of prevent-ing the agent from getting stuck in an intermediate state.",
"The following subsections detailedly introduce the definitions of the fundamental components in the proposed RL method.",
"States In order to satisfy the condition of MDP, the state s includes the information from the current sentence and the sentences that have been removed in early states.",
"The semantic and syntactic information of sentence is represented by a continuous real-valued vector.",
"According to some state-of-the-art supervised relation extraction approaches (Zeng et al., 2014; Nguyen and Grishman, 2015) , we utilize both word embedding and position embedding to convert sentence into vector.",
"With this sentence vector, the current state is the concatenation of the current sentence vector and the average vector of the removed sentences in early states.",
"We give relatively larger weight for the vector of the current sentence, in which way to magnify the dominating influence of the current sentence information for the decision of action.",
"Actions At each step, our agent is required to determine whether the instance is false positive for target relation type.",
"Each relation type has a agent 1 .",
"There are two actions for each agent: whether to remove or retain the current instance from the training set.",
"With the initial distantlysupervised dataset that is blended with incorrectlylabeled instances, we hope that our agent is capable of using the policy network to filter noisy instances; Under this cleaned dataset, distant supervision is then expected to achieve better performance.",
"Rewards As previously mentioned, the intuition of our model is that, when the incorrectly-labeled instances are filtered, the better performance of relation classifier will achieve.",
"Therefore, we use the change of performance as the result-driven reward for a series of actions decided by the agent.",
"Compared to accuracy, we adopt the F 1 score as the evaluation criterion, since accuracy might not be an indicative metric in a multi-class classification setting where the data distribution could be imbalanced.",
"Thus, the reward can be formulated as the RL Agent Train Relation Classifier \" #$\" \" # × + # + ×(− # ) Noisy dataset - ./# Cleaned dataset - #$\" Cleaned dataset - # Removed part Removed part Train # = ( \" # -\" #$\" ) Relation Classifier RL Agent Epoch − 1 : Epoch : Figure 2 : The proposed policy-based reinforcement learning framework.",
"The agent tries to remove the wrong-labeled sentences from the distantly-supervised positive dataset P ori .",
"In order to calculate the reward, P ori is split into the training part P ori t and the validation part P ori v ; their corresponding negative part are represented as N ori t and N ori v .",
"In each epoch i, the agent performs a series of actions to recognize the false positive samples from P ori t and treat them as negative samples.",
"Then, a new relation classifier is trained under the new dataset Noisy dataset - ./# + - ./# { 6 #$\" , 6 #$\" } - #$\" - ./# + - # { 6 # , 6 # } {P i t , N i t }.",
"With this relation classifier, F 1 score is calculated from the new validation set {P i v , N i v }, where P i v is also filtered by the current agent.",
"After that, the current reward is measured as the difference of F 1 between the adjacent epochs.",
"difference between the adjacent epochs: R i = α(F i 1 − F i−1 1 ) (1) As this equation shows, in step i, our agent is given a positive reward only if F 1 gets improved; otherwise, the agent will receive a negative reward.",
"Under this setting, the value of reward is proportional to the difference of F 1 , and α is used to convert this difference into a rational numeric range.",
"Naturally, the value of the reward is in a continuous space, which is more reasonable than a binary reward (−1 and 1), because this setting can reflect the number of wrong-labeled instance that the agent has removed.",
"In order to avoid the randomness of F 1 , we use the average F 1 of last five epochs to calculate the reward.",
"Policy Network For each input sentence, our policy network is to determine whether it expresses the target relation type and then make removal action if it is irrelevant to the target relation type.",
"Thus, it is analogous to a binary relation classifier.",
"CNN is commonly used to construct relation classification system (Santos et al., 2015; Xu et al., 2015; Shen and Huang, 2016) , so we adopt a simple CNN with window size c w and kernel size c k , to model policy network π(s; θ).",
"The reason why we do not choice the variants of CNN (Zeng et al., 2015; Lin et al., 2016) that are well-designed for distant supervision is that these two models belong to bag-level models (dealing with a bag of sentences simultaneously) and deal with the multi-classification problem; We just need a model to do binary sentencelevel classification.",
"Naturally, the simpler network is adopted.",
"Training Policy-based Agent Unlike the goal of distant supervision relation extraction, our agent is to determine whether an annotated sentence expresses the target relation type rather than predict the relationship of entity pair, so sentences are treated independently despite belonging to the same entity pair.",
"In distant supervision training dataset, one relation type contains several thousands or ten thousands sentences; moreover, reward R can only be calculated after processing the whole positive set of this relation type.",
"If we randomly initialize the parameters of policy network and train this network by trial and errors, it will waste a lot of time and be inclined to poor convergence properties.",
"In order to overcome this problem, we adopt a supervised learning procedure to pre-train our policy network, in which way to provide a general learning direction for our policy-based agent.",
"Pre-training Strategy The pre-training strategy, inspired from AlphaGo (Silver et al., 2016) , is a common strategy in RL related works to accelerate the training of RL agents.",
"Normally, they utilize a small part of the annotated dataset to train policy networks before reinforcement learning.",
"For example, AlphaGo uses the collected experts moves to do a supervised learning for Go RL agent.",
"However, in distant supervision relation extraction task, there is not any supervised information that can be used unless let linguistic experts to do some manual annotations for part of the entity pairs.",
"However, this is expensive, and it is not the original intention of distant supervision.",
"Under this circumstance, we propose a compromised solution.",
"With well-aligned corpus, the true positive samples should have evident advantage in quantity compared with false positive samples in the distantly-supervised dataset.",
"So, for a specific relation type, we directly treat the distantly-supervised positive set as the positive set, and randomly extract part of distantly-supervised negative set as the negative set.",
"In order to better consider prior information during this pre-training procedure, the amount of negative samples is 10 times of the number of positive samples.",
"It is because, when learning with massive negative samples, the agent is more likely to develop toward a better direction.",
"Cross-entropy cost function is used to train this binary classifier, where the negative label corresponds to the removing action, and the positive label corresponds to the retaining action.",
"(2) J(θ) = i y i log[π(a = y i |s i ; θ)] + (1 − y i )log[1 − π(a = y i |s i ; θ)] Due to the noisy nature of the distantly-labeled instances, if we let this pre-training process overfit this noisy dataset, the predicted probabilities of most samples tend to be close to 0 or 1, which is difficult to be corrected and unnecessarily increases the training cost of reinforcement learning.",
"So, we stop this training process when the accuracy reaches 85% ∼ 90%.",
"Theoretically, our approach can be explained as increasing the entropy of the policy gradient agent, and preventing the entropy of the policy being too low, which means that the lack of exploration may be a concern.",
"3.1.2 Retraining Agent with Rewards As shown in Figure 2 , in order to discover incorrectly-labeled instances without any supervised information, we introduce a policy-based RL method.",
"What our agent tries to deal with is the noisy samples from the distantly-supervised positive dataset; Here we call it as the DS positive dataset.",
"We split it into the training positive set P ori t and the validation positive set P ori v ; naturally, both of these two set are noisy.",
"Correspondingly, the training negative set N ori t and the validation negative set N ori v are constructed by randomly selected from the DS negative dataset.",
"In every epoch, the agent removes a noisy sample set Ψ i from P ori t according to the stochastic policy π(a|s), and we obtain a new positive set P t = P ori t − Ψ i .",
"Because Ψ i is recognized as the wrong-labeled samples, we redistribute it into the negative set N t = N ori t + Ψ i .",
"Under this setting, the scale of training set is constant for each epoch.",
"Now we utilize the cleaned data {P t , N t } to train a relation classifier.",
"The desirable situation is that RL agent has the capacity to increase the performance of relation classifier through relocating incorrectly-labeled false positive instances.",
"Therefore, we use the validation set {P ori v , N ori v } to measure the performance of the current agent.",
"First, this validation set is filtered and redistributed by the current agent as {P v , N v }; the F 1 score of the current relation classifier is calculated from it.",
"Finally, the difference of F 1 scores between the current and previous epoch is used to calculate reward.",
"Next, we will introduce several strategies to train a more robust RL agent.",
"Removing the fixed number of sentences in each epoch In every epoch, we let the RL agent to remove a fixed number of sentences or less (when the number of the removed sentences in one epoch does not reach this fixed number during training), in which way to prevent the case that the agent tries to remove more false positive instances by removing more instances.",
"Under the restriction of fixed number, if the agent decides to remove the current state, it means the chance of removing other states decrease.",
"Therefore, in order to obtain a better reward, the agent should try to remove a instance set that includes more negative instances.",
"Loss function The quality of the RL agent is reflected by the quality of the removed part.",
"After the pre-training process, the agent just possesses Algorithm 1 Retraining agent with rewards for relation k. For a clearer expression, k is omitted in the following algorithm.",
"Require: Positive set {P ori t , P ori v }, Negative set {N ori t , N ori v }, the fixed number of removal γ t , γ v 1: Load parameters θ from pre-trained policy network 2: Initialize s * as the all-zero vector with the same dimension of s j 3: for epoch i = 1 → N do 4: for s j ∈ P ori t do 5: s j = concatenation(s j , s * ) 6: Randomly sample a j ∼ π(a| s j ; θ); compute p j = π(a = 0| s j ; θ) 7: if a j == 0 then Rank T based on p j from high to low, obtain T rank 12: for t i in T rank [: γ t ] do 13: Add t i [0] into Ψ i 14: end for 15: P i t = P ori t − Ψ i , N i t = N ori t + Ψ i , R = α(F i 1 − F i−1 1 ) 19 : Ω i−1 = Ψ i−1 − Ψ i ∩ Ψ i−1 ; Ω i = Ψ i − Ψ i ∩ Ψ i−1 20: 21: Updata θ: g ∝ θ Ω i log π(a|s; θ)R + θ Ω i−1 log π(a|s; θ)(−R) 22: end for the ability to distinguish the obvious false positive instances, which means the discrimination of the indistinguishable wrong-labeled instances are still ambiguous.",
"Particularly, this indistinguishable part is the criterion to reflect the quality of the agent.",
"Therefore, regardless of these easydistinguished instances, the different parts of the removed parts in different epochs are the determinant of the change of F 1 scores.",
"Therefore, we definite two sets: Ω i−1 = Ψ i−1 − (Ψ i ∩ Ψ i−1 ) (3) Ω i = Ψ i − (Ψ i ∩ Ψ i−1 ) (4) where Ψ i is the removed part of epoch i. Ω i−1 and Ω i are represented with the different colors in Figure 2.",
"If F 1 score increases in the epoch i, it means the actions of the epoch i is more reasonable than that in the epoch i − 1.",
"In other words, Ω i is more negative than Ω i−1 .",
"Thus, we assign the positive reward to Ω i and the negative reward to Ω i−1 , and vice versa.",
"In summary, the ultimate loss function is formulated as follow: (5) J(θ) = Ω i log π(a|s; θ)R + Ω i−1 log π(a|s; θ)(−R) Redistributing Training Dataset with Policy-based Agents Through the above reinforcement learning procedure, for each relation type, we obtain a agent as the false-positive indicator.",
"These agents possess the capability of recognizing incorrectly-labeled instances of the corresponding relation types.",
"We adopt these agents as classifiers to recognize false positive samples in the noisy distantly-supervised training dataset.",
"For one entity pair, if all the sentence aligned from corpus are classified as false positive, then this entity pair is redistributed into the negative set.",
"Experiments We adopt a policy-based RL method to generate a series of relation indicators and use them to re-distribute training dataset by moving false positive samples to negative sample set.",
"Therefore, our experiments are intended to demonstrate that our RL agents possess this capability.",
"Datast and Evaluation Metrics We evaluate the proposed method on a commonlyused dataset 2 , which is first presented in Riedel et al.",
"(2010) .",
"This dataset is generated by aligning entity pairs from Freebase with New York Times corpus(NYT).",
"Entity mentions of NYT corpus are recognized by the Stanford named entity recognizer (Finkel et al., 2005) .",
"Similar to the previous works, we adopt the held-out evaluation to evaluate our model, which can provide an approximate measure of the classification ability without costly human evaluation.",
"Similar to the generation of the training set, the entity pairs in test set are also selected from Freebase, which will be predicted under the sentences discovered from the NYT corpus.",
"Experimental Settings Policy-based Agent The action space of our RL agent just includes two actions.",
"Therefore, the agent can be modeled as a binary classifier.",
"We adopt a single-window CNN as this policy network.",
"The detailed hyperparameter settings are presented in Table 1 .",
"As for word embeddings, we directly use the word embedding file released by Lin et al.",
"(2016) 3 , which just keeps the words that appear more than 100 times in NYT.",
"Moreover, we have the same dimension setting of the position embedding, and the maximum length of relative distance is −30 and 30 (\"-\" and \"+\" represent the left and right side of the entities).",
"The learning rate of reinforcement learning is 2e −5 .",
"For each relation type, the fixed number γ t , γ v are according to the pre-trained agent.",
"When one relation type has too many distantsupervised positive sentences (for example, /lo-2 http://iesl.cs.umass.edu/riedel/ecml/ 3 https://github.com/thunlp/NRE Table 2 : Comparison of F 1 scores among three cases: the relation classifier is trained with the original dataset, the redistributed dataset generated by the pre-trained agent, and the redistributed dataset generated by our RL agent respectively.",
"The name of relation types are abbreviated: /peo/per/pob represents /people/person/place of birth cation/location/contains has 75768 sentences), we sample a subset of size 7,500 sentences to train the agent.",
"For the average vector of the removed sentences, in the pre-training process and the first state of the retraining process, it is set as all-zero vector.",
"Relation Classifier for Calculating Reward In order to evaluate a series of actions by agent, we use a simple CNN model, because the simple network is more sensitive to the quality of the training set.",
"The proportion between P ori t and P ori v is 2:1, and they are all derived from the training set of Riedel dataset; the corresponding negative sample sets N ori t and N ori v are randomly selected from the Riedel negative dataset, whose size is twice that of their corresponding positive sets.",
"The Effectiveness of Reinforcement Learning In Table 2 , we list the F 1 scores before and after adopting the proposed RL method.",
"Even though there are 52 actual relation types in Riedel dataset, only 10 relation types have more than 1000 pos- Zeng et al.",
"(2015) and Lin et al.",
"(2016) are both the robust models to solve wrong labeling problem of distant supervision relation extraction.",
"Zeng et al.",
"(2015) combine at-least-one multi-instance learning with deep neural network to extract only one active sentence to predict the relation between entity pair; Lin et al.",
"(2016) combine all sentences of one entity pair and assign soft attention weights to them, in which way to generate a compositive relation representation for this entity pair.",
"However, the false positive phenomenon also includes the case that all the sentences of one entity pair are wrong, which is because the corpus is not completely aligned with the knowledge base.",
"This phenomenon is also common between Riedel dataset and Freebase through our manual inspection.",
"Obviously, there is nothing the above two methods can do in this case.",
"The proposed RL method is to tackle this problem.",
"We adopt our RL agents to redistribute Riedel dataset by moving false positive samples into the negative sample set.",
"Then we use Zeng et al.",
"(2015) and Lin et al.",
"(2016) to predict relations on this cleaned dataset, and compare the performance with that on the original Riedel dataset.",
"As shown in Figure 3 and Figure 4 , under the assistant of our RL agent, the same model can achieve obvious improvement with more reasonable training dataset.",
"In order to give the more intuitive comparison, we calculate the AUC value of each PR curve, which reflects the area size under these curves.",
"These comparable results also indicate the effectiveness of our policy-based RL method.",
"Moreover, as can be seen from the result of t-test evaluation, all the p-values are less than 5e-02, so the improvements are significant.",
"proportional to the original scale, which is in accordance with the actual accident situation.",
"At the same time, we analyze the correlation between the false positive phenomenon and the number of sentences of entity pairs : With this the number ranging from 1 to 5, the corresponding percentages are [55.9%, 32.0%, 3.7%, 4.4%, 0.7%].",
"This distribution is consistent with our assumption.",
"Because Freebase is, to some extent, not completely aligned with the NYT corpus, entity pairs with fewer sentences are more likely to be false positive, which is the major factor hindering the performance of the previous systems.",
"In Table 4 , we present some false positive examples selected by our agents.",
"Taking entity pair (Sami Moubayed, Syria) as an example, it is obvious that there is not any valuable information reflecting relation /people/person/place of birth.",
"Both of these sentences talks about the situation analysis of Syria from the political analyst Sami Moubayed.",
"We also found that, for some entity pairs, even though there are multiple sentences, all of them are identical.",
"This phenomenon also increases the probability of the appearance of false positive samples.",
"Case Study Conclusion In this work, we propose a deep reinforcement learning framework for robust distant supervision.",
"The intuition is that, in contrast to prior works that utilize only one instance per entity pair and use soft attention weights to select plausible distantly supervised examples, we describe a policy-based framework to systematically learn to relocate the false positive samples, and better utilize the unlabeled data.",
"More specifically, our goal is to Table 2. teach the reinforcement agent to optimize the selection/redistribution strategy that maximizes the reward of boosting the performance of relation classification.",
"An important aspect of our work is that our framework does not depend on a specific form of the relation classifier, meaning that it is a plug-and-play technique that could be potentially applied to any relation extraction pipeline.",
"In experiments, we show that our framework boosts the performance of distant supervision relation extraction of various strong deep learning baselines on the widely used New York Times -Freebase dataset."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.1.1",
"3.2",
"4",
"4.1",
"4.2.1",
"4.2.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Reinforcement Learning for Distant Supervision",
"Training Policy-based Agent",
"Pre-training Strategy",
"Redistributing Training Dataset with",
"Experiments",
"Datast and Evaluation Metrics",
"Policy-based Agent",
"Relation Classifier for Calculating Reward",
"The Effectiveness of Reinforcement Learning",
"Conclusion"
]
} | GEM-SciDuet-train-136#paper-1365#slide-5 | Reward | Positive Set Negative Set | Positive Set Negative Set | [] |
GEM-SciDuet-train-136#paper-1365#slide-6 | 1365 | Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning | Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not come at no cost: the resulting distantly-supervised training samples are often very noisy. To combat the noise, most of the recent state-of-the-art approaches focus on selecting one best sentence or calculating soft attention weights over the set of the sentences of one specific entity pair. However, these methods are suboptimal, and the false positive problem is still a key stumbling bottleneck for the performance. We argue that those incorrectly-labeled candidate sentences must be treated with a hard decision, rather than being dealt with via soft attention weights. To do this, our paper describes a radical solution: we explore a deep reinforcement learning strategy to generate the false-positive indicator, where we automatically recognize false positives for each relation type without any supervised information. Unlike the removal operation in the previous studies, we redistribute them into the negative examples. The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared to state-of-the-art systems. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"Introduction Relation extraction is a core task in information extraction and natural language understanding.",
"The goal of relation extraction is to predict relations for entities in a sentence (Zelenko et al., 2003; Bunescu and Mooney, 2005; GuoDong et al., 2005) .",
"For example, given a sentence \"Barack Obama is married to Michelle Obama.",
"\", a relation classifier aims at predicting the relation of \"spouse\".",
"In downstream applications, relation extraction is the key module for constructing knowledge graphs, and it is a vital component of many natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.",
"A major issue encountered in the early development of relation extraction algorithms is the data sparsity issue-It is extremely expensive, and almost impossible for human annotators to go through a large corpus of millions of sentences to provide a large amount of labeled training instances.",
"Therefore, distant supervision relation extraction (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012) becomes popular, because it uses entity pairs from knowledge bases to select a set of noisy instances from unlabeled data.",
"In recent years, neural network approaches (Zeng et al., 2014 (Zeng et al., , 2015 have been proposed to train the relation extractor under these noisy conditions.",
"To suppress the noisy (Roth et al., 2013) , recent stud-ies (Lin et al., 2016) have proposed the use of attention mechanisms to place soft weights on a set of noisy sentences, and select samples.",
"However, we argue that only selecting one example or based on soft attention weights are not the optimal strategy: To improve the robustness, we need a systematic solution to make use of more instances, while removing false positives and placing them in the right place.",
"In this paper, we investigate the possibility of using dynamic selection strategies for robust distant supervision.",
"More specifically, we design a deep reinforcement learning agent, whose goal is to learn to choose whether to remove or remain the distantly supervised candidate instance based on the performance change of the relation classifier.",
"Intuitively, our agent would like to remove false positives, and reconstruct a cleaned set of distantly supervised instances to maximize the reward based on the classification accuracy.",
"Our proposed method is classifier-independent, and it can be applied to any existing distant supervision model.",
"Empirically, we show that our method has brought consistent performance gains in various deep neural network based models, achieving strong performances on the widely used New York Times dataset (Riedel et al., 2010) .",
"Our contributions are three-fold: • We propose a novel deep reinforcement learning framework for robust distant supervision relation extraction.",
"• Our method is model-independent, meaning that it could be applied to any state-of-the-art relation extractors.",
"• We show that our method can boost the performances of recently proposed neural relation extractors.",
"In Section 2, we will discuss related works on distant supervision relation extraction.",
"Next, we will describe our robust distant supervision framework in Section 3.",
"In Section 4, empirical evaluation results are shown.",
"And finally, we conclude in Section 5.",
"Mintz et al.",
"(2009) is the first study that combines dependency path and feature aggregation for distant supervision.",
"However, this approach would introduce a lot of false positives, as the same entity pair might have multiple relations.",
"To alleviate this issue, Hoffmann et al.",
"(2011) address this issue, and propose a model to jointly learn with multiple relations.",
"Surdeanu et al.",
"(2012) further propose a multi-instance multi-label learning framework to improve the performance.",
"Note that these early approaches do not explicitly remove noisy instances, but rather hope that the model would be able to suppress the noise.",
"Related Work Recently, with the advance of neural network techniques, deep learning methods (Zeng et al., 2014 (Zeng et al., , 2015 are introduced, and the hope is to model noisy distant supervision process in the hidden layers.",
"However, their approach only selects one most plausible instance per entity pair, inevitably missing out a lot of valuable training instances.",
"Recently, Lin et al.",
"(2016) propose an attention mechanism to select plausible instances from a set of noisy instances.",
"However, we believe that soft attention weight assignment might not be the optimal solution, since the false positives should be completely removed and placed in the negative set.",
"Ji et al.",
"(2017) combine the external knowledge to rich the representation of entity pair, in which way to improve the accuracy of attention weights.",
"Even though these above-mentioned methods can select high-quality instances, they ignore the false positive case: all the sentences of one entity pair belongs to the false positives.",
"In this work, we take a radical approach to solve this problem-We will make use of the distantly labeled resources as much as possible, while learning a independent false-positive indicator to remove false positives, and place them in the right place.",
"After our ACL submission, we notice that a contemporaneous study Feng et al.",
"(2018) also adopts reinforcement learning to learn an instance selector, but their reward is calculated from the prediction probabilities.",
"In contrast, while in our method, the reward is intuitively reflected by the performance change of the relation classifier.",
"Our approach is also complement to most of the approaches above, and can be directly applied on top of any existing relation extraction classifiers.",
"Reinforcement Learning for Distant Supervision We introduce a performance-driven, policy-based reinforcement learning method to heuristically recognize false positive samples.",
"Comparing to a prior study that has underutilized the distantlysupervised samples (Lin et al., 2016) , we consider an RL agent for robust distant supervision relation extraction.",
"We first describe the definitions of our RL method, including the policy-based agent, external environment, and pre-training strategy.",
"Next, we describe the retraining strategy for our RL agent.",
"The goal of our agent is to determine whether to retain or remove a distantlysupervised sentence, based on the performance change of relation classifier.",
"Finally, we describe the noisy-suppression method, where we teach our policy-based agent to make a redistribution for a cleaner distant supervision training dataset.",
"Distant supervision relation extraction is to predict the relation type of entity pair under the automatically-generated training set.",
"However, the issue is that these distantly-supervised sentences that mention this entity pair may not express the desired relation type.",
"Therefore, what our RL agent should do is to determine whether the distantly-supervised sentence is a true positive instance for this relation type.",
"For reinforcement learning, external environment and RL agent are two necessary components, and a robust agent is trained from the dynamic interaction between these two parts (Arulkumaran et al., 2017) .",
"First, the prerequisite of reinforcement learning is that the external environment should be modeled as a Markov decision process (MDP).",
"However, the traditional setting of relation extraction cannot satisfy this condition: the input sentences are independent of each other.",
"In other words, we cannot merely use the information of the sentence being processed as the state.",
"Thus, we add the information from the early states into the representation of the current state, in which way to model our task as a MDP problem (Fang et al., 2017) .",
"The other component, RL agent, is parameterized with a policy network π θ (s, a) = p(a|s; θ).",
"The probability distribution of actions A = {a remove , a remain } is calculated by policy network based on state vectors.",
"What needs to be noted is that, Deep Q Network (DQN) (Mnih et al., 2013) is also a widelyused RL method; however, it is not suitable for our case, even if our action space is small.",
"First, we cannot compute the immediate reward for every operation; In contrast, the accurate reward can only be obtained after finishing processing the whole training dataset.",
"Second, the stochastic policy of the policy network is capable of prevent-ing the agent from getting stuck in an intermediate state.",
"The following subsections detailedly introduce the definitions of the fundamental components in the proposed RL method.",
"States In order to satisfy the condition of MDP, the state s includes the information from the current sentence and the sentences that have been removed in early states.",
"The semantic and syntactic information of sentence is represented by a continuous real-valued vector.",
"According to some state-of-the-art supervised relation extraction approaches (Zeng et al., 2014; Nguyen and Grishman, 2015) , we utilize both word embedding and position embedding to convert sentence into vector.",
"With this sentence vector, the current state is the concatenation of the current sentence vector and the average vector of the removed sentences in early states.",
"We give relatively larger weight for the vector of the current sentence, in which way to magnify the dominating influence of the current sentence information for the decision of action.",
"Actions At each step, our agent is required to determine whether the instance is false positive for target relation type.",
"Each relation type has a agent 1 .",
"There are two actions for each agent: whether to remove or retain the current instance from the training set.",
"With the initial distantlysupervised dataset that is blended with incorrectlylabeled instances, we hope that our agent is capable of using the policy network to filter noisy instances; Under this cleaned dataset, distant supervision is then expected to achieve better performance.",
"Rewards As previously mentioned, the intuition of our model is that, when the incorrectly-labeled instances are filtered, the better performance of relation classifier will achieve.",
"Therefore, we use the change of performance as the result-driven reward for a series of actions decided by the agent.",
"Compared to accuracy, we adopt the F 1 score as the evaluation criterion, since accuracy might not be an indicative metric in a multi-class classification setting where the data distribution could be imbalanced.",
"Thus, the reward can be formulated as the RL Agent Train Relation Classifier \" #$\" \" # × + # + ×(− # ) Noisy dataset - ./# Cleaned dataset - #$\" Cleaned dataset - # Removed part Removed part Train # = ( \" # -\" #$\" ) Relation Classifier RL Agent Epoch − 1 : Epoch : Figure 2 : The proposed policy-based reinforcement learning framework.",
"The agent tries to remove the wrong-labeled sentences from the distantly-supervised positive dataset P ori .",
"In order to calculate the reward, P ori is split into the training part P ori t and the validation part P ori v ; their corresponding negative part are represented as N ori t and N ori v .",
"In each epoch i, the agent performs a series of actions to recognize the false positive samples from P ori t and treat them as negative samples.",
"Then, a new relation classifier is trained under the new dataset Noisy dataset - ./# + - ./# { 6 #$\" , 6 #$\" } - #$\" - ./# + - # { 6 # , 6 # } {P i t , N i t }.",
"With this relation classifier, F 1 score is calculated from the new validation set {P i v , N i v }, where P i v is also filtered by the current agent.",
"After that, the current reward is measured as the difference of F 1 between the adjacent epochs.",
"difference between the adjacent epochs: R i = α(F i 1 − F i−1 1 ) (1) As this equation shows, in step i, our agent is given a positive reward only if F 1 gets improved; otherwise, the agent will receive a negative reward.",
"Under this setting, the value of reward is proportional to the difference of F 1 , and α is used to convert this difference into a rational numeric range.",
"Naturally, the value of the reward is in a continuous space, which is more reasonable than a binary reward (−1 and 1), because this setting can reflect the number of wrong-labeled instance that the agent has removed.",
"In order to avoid the randomness of F 1 , we use the average F 1 of last five epochs to calculate the reward.",
"Policy Network For each input sentence, our policy network is to determine whether it expresses the target relation type and then make removal action if it is irrelevant to the target relation type.",
"Thus, it is analogous to a binary relation classifier.",
"CNN is commonly used to construct relation classification system (Santos et al., 2015; Xu et al., 2015; Shen and Huang, 2016) , so we adopt a simple CNN with window size c w and kernel size c k , to model policy network π(s; θ).",
"The reason why we do not choice the variants of CNN (Zeng et al., 2015; Lin et al., 2016) that are well-designed for distant supervision is that these two models belong to bag-level models (dealing with a bag of sentences simultaneously) and deal with the multi-classification problem; We just need a model to do binary sentencelevel classification.",
"Naturally, the simpler network is adopted.",
"Training Policy-based Agent Unlike the goal of distant supervision relation extraction, our agent is to determine whether an annotated sentence expresses the target relation type rather than predict the relationship of entity pair, so sentences are treated independently despite belonging to the same entity pair.",
"In distant supervision training dataset, one relation type contains several thousands or ten thousands sentences; moreover, reward R can only be calculated after processing the whole positive set of this relation type.",
"If we randomly initialize the parameters of policy network and train this network by trial and errors, it will waste a lot of time and be inclined to poor convergence properties.",
"In order to overcome this problem, we adopt a supervised learning procedure to pre-train our policy network, in which way to provide a general learning direction for our policy-based agent.",
"Pre-training Strategy The pre-training strategy, inspired from AlphaGo (Silver et al., 2016) , is a common strategy in RL related works to accelerate the training of RL agents.",
"Normally, they utilize a small part of the annotated dataset to train policy networks before reinforcement learning.",
"For example, AlphaGo uses the collected experts moves to do a supervised learning for Go RL agent.",
"However, in distant supervision relation extraction task, there is not any supervised information that can be used unless let linguistic experts to do some manual annotations for part of the entity pairs.",
"However, this is expensive, and it is not the original intention of distant supervision.",
"Under this circumstance, we propose a compromised solution.",
"With well-aligned corpus, the true positive samples should have evident advantage in quantity compared with false positive samples in the distantly-supervised dataset.",
"So, for a specific relation type, we directly treat the distantly-supervised positive set as the positive set, and randomly extract part of distantly-supervised negative set as the negative set.",
"In order to better consider prior information during this pre-training procedure, the amount of negative samples is 10 times of the number of positive samples.",
"It is because, when learning with massive negative samples, the agent is more likely to develop toward a better direction.",
"Cross-entropy cost function is used to train this binary classifier, where the negative label corresponds to the removing action, and the positive label corresponds to the retaining action.",
"(2) J(θ) = i y i log[π(a = y i |s i ; θ)] + (1 − y i )log[1 − π(a = y i |s i ; θ)] Due to the noisy nature of the distantly-labeled instances, if we let this pre-training process overfit this noisy dataset, the predicted probabilities of most samples tend to be close to 0 or 1, which is difficult to be corrected and unnecessarily increases the training cost of reinforcement learning.",
"So, we stop this training process when the accuracy reaches 85% ∼ 90%.",
"Theoretically, our approach can be explained as increasing the entropy of the policy gradient agent, and preventing the entropy of the policy being too low, which means that the lack of exploration may be a concern.",
"3.1.2 Retraining Agent with Rewards As shown in Figure 2 , in order to discover incorrectly-labeled instances without any supervised information, we introduce a policy-based RL method.",
"What our agent tries to deal with is the noisy samples from the distantly-supervised positive dataset; Here we call it as the DS positive dataset.",
"We split it into the training positive set P ori t and the validation positive set P ori v ; naturally, both of these two set are noisy.",
"Correspondingly, the training negative set N ori t and the validation negative set N ori v are constructed by randomly selected from the DS negative dataset.",
"In every epoch, the agent removes a noisy sample set Ψ i from P ori t according to the stochastic policy π(a|s), and we obtain a new positive set P t = P ori t − Ψ i .",
"Because Ψ i is recognized as the wrong-labeled samples, we redistribute it into the negative set N t = N ori t + Ψ i .",
"Under this setting, the scale of training set is constant for each epoch.",
"Now we utilize the cleaned data {P t , N t } to train a relation classifier.",
"The desirable situation is that RL agent has the capacity to increase the performance of relation classifier through relocating incorrectly-labeled false positive instances.",
"Therefore, we use the validation set {P ori v , N ori v } to measure the performance of the current agent.",
"First, this validation set is filtered and redistributed by the current agent as {P v , N v }; the F 1 score of the current relation classifier is calculated from it.",
"Finally, the difference of F 1 scores between the current and previous epoch is used to calculate reward.",
"Next, we will introduce several strategies to train a more robust RL agent.",
"Removing the fixed number of sentences in each epoch In every epoch, we let the RL agent to remove a fixed number of sentences or less (when the number of the removed sentences in one epoch does not reach this fixed number during training), in which way to prevent the case that the agent tries to remove more false positive instances by removing more instances.",
"Under the restriction of fixed number, if the agent decides to remove the current state, it means the chance of removing other states decrease.",
"Therefore, in order to obtain a better reward, the agent should try to remove a instance set that includes more negative instances.",
"Loss function The quality of the RL agent is reflected by the quality of the removed part.",
"After the pre-training process, the agent just possesses Algorithm 1 Retraining agent with rewards for relation k. For a clearer expression, k is omitted in the following algorithm.",
"Require: Positive set {P ori t , P ori v }, Negative set {N ori t , N ori v }, the fixed number of removal γ t , γ v 1: Load parameters θ from pre-trained policy network 2: Initialize s * as the all-zero vector with the same dimension of s j 3: for epoch i = 1 → N do 4: for s j ∈ P ori t do 5: s j = concatenation(s j , s * ) 6: Randomly sample a j ∼ π(a| s j ; θ); compute p j = π(a = 0| s j ; θ) 7: if a j == 0 then Rank T based on p j from high to low, obtain T rank 12: for t i in T rank [: γ t ] do 13: Add t i [0] into Ψ i 14: end for 15: P i t = P ori t − Ψ i , N i t = N ori t + Ψ i , R = α(F i 1 − F i−1 1 ) 19 : Ω i−1 = Ψ i−1 − Ψ i ∩ Ψ i−1 ; Ω i = Ψ i − Ψ i ∩ Ψ i−1 20: 21: Updata θ: g ∝ θ Ω i log π(a|s; θ)R + θ Ω i−1 log π(a|s; θ)(−R) 22: end for the ability to distinguish the obvious false positive instances, which means the discrimination of the indistinguishable wrong-labeled instances are still ambiguous.",
"Particularly, this indistinguishable part is the criterion to reflect the quality of the agent.",
"Therefore, regardless of these easydistinguished instances, the different parts of the removed parts in different epochs are the determinant of the change of F 1 scores.",
"Therefore, we definite two sets: Ω i−1 = Ψ i−1 − (Ψ i ∩ Ψ i−1 ) (3) Ω i = Ψ i − (Ψ i ∩ Ψ i−1 ) (4) where Ψ i is the removed part of epoch i. Ω i−1 and Ω i are represented with the different colors in Figure 2.",
"If F 1 score increases in the epoch i, it means the actions of the epoch i is more reasonable than that in the epoch i − 1.",
"In other words, Ω i is more negative than Ω i−1 .",
"Thus, we assign the positive reward to Ω i and the negative reward to Ω i−1 , and vice versa.",
"In summary, the ultimate loss function is formulated as follow: (5) J(θ) = Ω i log π(a|s; θ)R + Ω i−1 log π(a|s; θ)(−R) Redistributing Training Dataset with Policy-based Agents Through the above reinforcement learning procedure, for each relation type, we obtain a agent as the false-positive indicator.",
"These agents possess the capability of recognizing incorrectly-labeled instances of the corresponding relation types.",
"We adopt these agents as classifiers to recognize false positive samples in the noisy distantly-supervised training dataset.",
"For one entity pair, if all the sentence aligned from corpus are classified as false positive, then this entity pair is redistributed into the negative set.",
"Experiments We adopt a policy-based RL method to generate a series of relation indicators and use them to re-distribute training dataset by moving false positive samples to negative sample set.",
"Therefore, our experiments are intended to demonstrate that our RL agents possess this capability.",
"Datast and Evaluation Metrics We evaluate the proposed method on a commonlyused dataset 2 , which is first presented in Riedel et al.",
"(2010) .",
"This dataset is generated by aligning entity pairs from Freebase with New York Times corpus(NYT).",
"Entity mentions of NYT corpus are recognized by the Stanford named entity recognizer (Finkel et al., 2005) .",
"Similar to the previous works, we adopt the held-out evaluation to evaluate our model, which can provide an approximate measure of the classification ability without costly human evaluation.",
"Similar to the generation of the training set, the entity pairs in test set are also selected from Freebase, which will be predicted under the sentences discovered from the NYT corpus.",
"Experimental Settings Policy-based Agent The action space of our RL agent just includes two actions.",
"Therefore, the agent can be modeled as a binary classifier.",
"We adopt a single-window CNN as this policy network.",
"The detailed hyperparameter settings are presented in Table 1 .",
"As for word embeddings, we directly use the word embedding file released by Lin et al.",
"(2016) 3 , which just keeps the words that appear more than 100 times in NYT.",
"Moreover, we have the same dimension setting of the position embedding, and the maximum length of relative distance is −30 and 30 (\"-\" and \"+\" represent the left and right side of the entities).",
"The learning rate of reinforcement learning is 2e −5 .",
"For each relation type, the fixed number γ t , γ v are according to the pre-trained agent.",
"When one relation type has too many distantsupervised positive sentences (for example, /lo-2 http://iesl.cs.umass.edu/riedel/ecml/ 3 https://github.com/thunlp/NRE Table 2 : Comparison of F 1 scores among three cases: the relation classifier is trained with the original dataset, the redistributed dataset generated by the pre-trained agent, and the redistributed dataset generated by our RL agent respectively.",
"The name of relation types are abbreviated: /peo/per/pob represents /people/person/place of birth cation/location/contains has 75768 sentences), we sample a subset of size 7,500 sentences to train the agent.",
"For the average vector of the removed sentences, in the pre-training process and the first state of the retraining process, it is set as all-zero vector.",
"Relation Classifier for Calculating Reward In order to evaluate a series of actions by agent, we use a simple CNN model, because the simple network is more sensitive to the quality of the training set.",
"The proportion between P ori t and P ori v is 2:1, and they are all derived from the training set of Riedel dataset; the corresponding negative sample sets N ori t and N ori v are randomly selected from the Riedel negative dataset, whose size is twice that of their corresponding positive sets.",
"The Effectiveness of Reinforcement Learning In Table 2 , we list the F 1 scores before and after adopting the proposed RL method.",
"Even though there are 52 actual relation types in Riedel dataset, only 10 relation types have more than 1000 pos- Zeng et al.",
"(2015) and Lin et al.",
"(2016) are both the robust models to solve wrong labeling problem of distant supervision relation extraction.",
"Zeng et al.",
"(2015) combine at-least-one multi-instance learning with deep neural network to extract only one active sentence to predict the relation between entity pair; Lin et al.",
"(2016) combine all sentences of one entity pair and assign soft attention weights to them, in which way to generate a compositive relation representation for this entity pair.",
"However, the false positive phenomenon also includes the case that all the sentences of one entity pair are wrong, which is because the corpus is not completely aligned with the knowledge base.",
"This phenomenon is also common between Riedel dataset and Freebase through our manual inspection.",
"Obviously, there is nothing the above two methods can do in this case.",
"The proposed RL method is to tackle this problem.",
"We adopt our RL agents to redistribute Riedel dataset by moving false positive samples into the negative sample set.",
"Then we use Zeng et al.",
"(2015) and Lin et al.",
"(2016) to predict relations on this cleaned dataset, and compare the performance with that on the original Riedel dataset.",
"As shown in Figure 3 and Figure 4 , under the assistant of our RL agent, the same model can achieve obvious improvement with more reasonable training dataset.",
"In order to give the more intuitive comparison, we calculate the AUC value of each PR curve, which reflects the area size under these curves.",
"These comparable results also indicate the effectiveness of our policy-based RL method.",
"Moreover, as can be seen from the result of t-test evaluation, all the p-values are less than 5e-02, so the improvements are significant.",
"proportional to the original scale, which is in accordance with the actual accident situation.",
"At the same time, we analyze the correlation between the false positive phenomenon and the number of sentences of entity pairs : With this the number ranging from 1 to 5, the corresponding percentages are [55.9%, 32.0%, 3.7%, 4.4%, 0.7%].",
"This distribution is consistent with our assumption.",
"Because Freebase is, to some extent, not completely aligned with the NYT corpus, entity pairs with fewer sentences are more likely to be false positive, which is the major factor hindering the performance of the previous systems.",
"In Table 4 , we present some false positive examples selected by our agents.",
"Taking entity pair (Sami Moubayed, Syria) as an example, it is obvious that there is not any valuable information reflecting relation /people/person/place of birth.",
"Both of these sentences talks about the situation analysis of Syria from the political analyst Sami Moubayed.",
"We also found that, for some entity pairs, even though there are multiple sentences, all of them are identical.",
"This phenomenon also increases the probability of the appearance of false positive samples.",
"Case Study Conclusion In this work, we propose a deep reinforcement learning framework for robust distant supervision.",
"The intuition is that, in contrast to prior works that utilize only one instance per entity pair and use soft attention weights to select plausible distantly supervised examples, we describe a policy-based framework to systematically learn to relocate the false positive samples, and better utilize the unlabeled data.",
"More specifically, our goal is to Table 2. teach the reinforcement agent to optimize the selection/redistribution strategy that maximizes the reward of boosting the performance of relation classification.",
"An important aspect of our work is that our framework does not depend on a specific form of the relation classifier, meaning that it is a plug-and-play technique that could be potentially applied to any relation extraction pipeline.",
"In experiments, we show that our framework boosts the performance of distant supervision relation extraction of various strong deep learning baselines on the widely used New York Times -Freebase dataset."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.1.1",
"3.2",
"4",
"4.1",
"4.2.1",
"4.2.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Reinforcement Learning for Distant Supervision",
"Training Policy-based Agent",
"Pre-training Strategy",
"Redistributing Training Dataset with",
"Experiments",
"Datast and Evaluation Metrics",
"Policy-based Agent",
"Relation Classifier for Calculating Reward",
"The Effectiveness of Reinforcement Learning",
"Conclusion"
]
} | GEM-SciDuet-train-136#paper-1365#slide-6 | Evaluation on a Synthetic Noise Dataset | False Positive: Other relation types
True Positive + False Positive: samples
False Positive Removed Part Epoch | False Positive: Other relation types
True Positive + False Positive: samples
False Positive Removed Part Epoch | [] |
GEM-SciDuet-train-136#paper-1365#slide-8 | 1365 | Robust Distant Supervision Relation Extraction via Deep Reinforcement Learning | Distant supervision has become the standard method for relation extraction. However, even though it is an efficient method, it does not come at no cost: the resulting distantly-supervised training samples are often very noisy. To combat the noise, most of the recent state-of-the-art approaches focus on selecting one best sentence or calculating soft attention weights over the set of the sentences of one specific entity pair. However, these methods are suboptimal, and the false positive problem is still a key stumbling bottleneck for the performance. We argue that those incorrectly-labeled candidate sentences must be treated with a hard decision, rather than being dealt with via soft attention weights. To do this, our paper describes a radical solution: we explore a deep reinforcement learning strategy to generate the false-positive indicator, where we automatically recognize false positives for each relation type without any supervised information. Unlike the removal operation in the previous studies, we redistribute them into the negative examples. The experimental results show that the proposed strategy significantly improves the performance of distant supervision compared to state-of-the-art systems. | {
"paper_content_id": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25,
26,
27,
28,
29,
30,
31,
32,
33,
34,
35,
36,
37,
38,
39,
40,
41,
42,
43,
44,
45,
46,
47,
48,
49,
50,
51,
52,
53,
54,
55,
56,
57,
58,
59,
60,
61,
62,
63,
64,
65,
66,
67,
68,
69,
70,
71,
72,
73,
74,
75,
76,
77,
78,
79,
80,
81,
82,
83,
84,
85,
86,
87,
88,
89,
90,
91,
92,
93,
94,
95,
96,
97,
98,
99,
100,
101,
102,
103,
104,
105,
106,
107,
108,
109,
110,
111,
112,
113,
114,
115,
116,
117,
118,
119,
120,
121,
122,
123,
124,
125,
126,
127,
128,
129,
130,
131,
132,
133,
134,
135,
136,
137,
138,
139,
140,
141,
142,
143,
144,
145,
146,
147,
148,
149,
150,
151,
152,
153,
154,
155,
156,
157,
158,
159,
160,
161,
162,
163,
164,
165,
166,
167,
168,
169,
170,
171,
172,
173,
174,
175,
176,
177,
178,
179,
180,
181,
182,
183,
184,
185,
186,
187,
188,
189,
190,
191,
192
],
"paper_content_text": [
"Introduction Relation extraction is a core task in information extraction and natural language understanding.",
"The goal of relation extraction is to predict relations for entities in a sentence (Zelenko et al., 2003; Bunescu and Mooney, 2005; GuoDong et al., 2005) .",
"For example, given a sentence \"Barack Obama is married to Michelle Obama.",
"\", a relation classifier aims at predicting the relation of \"spouse\".",
"In downstream applications, relation extraction is the key module for constructing knowledge graphs, and it is a vital component of many natural language processing applications such as structured search, sentiment analysis, question answering, and summarization.",
"A major issue encountered in the early development of relation extraction algorithms is the data sparsity issue-It is extremely expensive, and almost impossible for human annotators to go through a large corpus of millions of sentences to provide a large amount of labeled training instances.",
"Therefore, distant supervision relation extraction (Mintz et al., 2009; Hoffmann et al., 2011; Surdeanu et al., 2012) becomes popular, because it uses entity pairs from knowledge bases to select a set of noisy instances from unlabeled data.",
"In recent years, neural network approaches (Zeng et al., 2014 (Zeng et al., , 2015 have been proposed to train the relation extractor under these noisy conditions.",
"To suppress the noisy (Roth et al., 2013) , recent stud-ies (Lin et al., 2016) have proposed the use of attention mechanisms to place soft weights on a set of noisy sentences, and select samples.",
"However, we argue that only selecting one example or based on soft attention weights are not the optimal strategy: To improve the robustness, we need a systematic solution to make use of more instances, while removing false positives and placing them in the right place.",
"In this paper, we investigate the possibility of using dynamic selection strategies for robust distant supervision.",
"More specifically, we design a deep reinforcement learning agent, whose goal is to learn to choose whether to remove or remain the distantly supervised candidate instance based on the performance change of the relation classifier.",
"Intuitively, our agent would like to remove false positives, and reconstruct a cleaned set of distantly supervised instances to maximize the reward based on the classification accuracy.",
"Our proposed method is classifier-independent, and it can be applied to any existing distant supervision model.",
"Empirically, we show that our method has brought consistent performance gains in various deep neural network based models, achieving strong performances on the widely used New York Times dataset (Riedel et al., 2010) .",
"Our contributions are three-fold: • We propose a novel deep reinforcement learning framework for robust distant supervision relation extraction.",
"• Our method is model-independent, meaning that it could be applied to any state-of-the-art relation extractors.",
"• We show that our method can boost the performances of recently proposed neural relation extractors.",
"In Section 2, we will discuss related works on distant supervision relation extraction.",
"Next, we will describe our robust distant supervision framework in Section 3.",
"In Section 4, empirical evaluation results are shown.",
"And finally, we conclude in Section 5.",
"Mintz et al.",
"(2009) is the first study that combines dependency path and feature aggregation for distant supervision.",
"However, this approach would introduce a lot of false positives, as the same entity pair might have multiple relations.",
"To alleviate this issue, Hoffmann et al.",
"(2011) address this issue, and propose a model to jointly learn with multiple relations.",
"Surdeanu et al.",
"(2012) further propose a multi-instance multi-label learning framework to improve the performance.",
"Note that these early approaches do not explicitly remove noisy instances, but rather hope that the model would be able to suppress the noise.",
"Related Work Recently, with the advance of neural network techniques, deep learning methods (Zeng et al., 2014 (Zeng et al., , 2015 are introduced, and the hope is to model noisy distant supervision process in the hidden layers.",
"However, their approach only selects one most plausible instance per entity pair, inevitably missing out a lot of valuable training instances.",
"Recently, Lin et al.",
"(2016) propose an attention mechanism to select plausible instances from a set of noisy instances.",
"However, we believe that soft attention weight assignment might not be the optimal solution, since the false positives should be completely removed and placed in the negative set.",
"Ji et al.",
"(2017) combine the external knowledge to rich the representation of entity pair, in which way to improve the accuracy of attention weights.",
"Even though these above-mentioned methods can select high-quality instances, they ignore the false positive case: all the sentences of one entity pair belongs to the false positives.",
"In this work, we take a radical approach to solve this problem-We will make use of the distantly labeled resources as much as possible, while learning a independent false-positive indicator to remove false positives, and place them in the right place.",
"After our ACL submission, we notice that a contemporaneous study Feng et al.",
"(2018) also adopts reinforcement learning to learn an instance selector, but their reward is calculated from the prediction probabilities.",
"In contrast, while in our method, the reward is intuitively reflected by the performance change of the relation classifier.",
"Our approach is also complement to most of the approaches above, and can be directly applied on top of any existing relation extraction classifiers.",
"Reinforcement Learning for Distant Supervision We introduce a performance-driven, policy-based reinforcement learning method to heuristically recognize false positive samples.",
"Comparing to a prior study that has underutilized the distantlysupervised samples (Lin et al., 2016) , we consider an RL agent for robust distant supervision relation extraction.",
"We first describe the definitions of our RL method, including the policy-based agent, external environment, and pre-training strategy.",
"Next, we describe the retraining strategy for our RL agent.",
"The goal of our agent is to determine whether to retain or remove a distantlysupervised sentence, based on the performance change of relation classifier.",
"Finally, we describe the noisy-suppression method, where we teach our policy-based agent to make a redistribution for a cleaner distant supervision training dataset.",
"Distant supervision relation extraction is to predict the relation type of entity pair under the automatically-generated training set.",
"However, the issue is that these distantly-supervised sentences that mention this entity pair may not express the desired relation type.",
"Therefore, what our RL agent should do is to determine whether the distantly-supervised sentence is a true positive instance for this relation type.",
"For reinforcement learning, external environment and RL agent are two necessary components, and a robust agent is trained from the dynamic interaction between these two parts (Arulkumaran et al., 2017) .",
"First, the prerequisite of reinforcement learning is that the external environment should be modeled as a Markov decision process (MDP).",
"However, the traditional setting of relation extraction cannot satisfy this condition: the input sentences are independent of each other.",
"In other words, we cannot merely use the information of the sentence being processed as the state.",
"Thus, we add the information from the early states into the representation of the current state, in which way to model our task as a MDP problem (Fang et al., 2017) .",
"The other component, RL agent, is parameterized with a policy network π θ (s, a) = p(a|s; θ).",
"The probability distribution of actions A = {a remove , a remain } is calculated by policy network based on state vectors.",
"What needs to be noted is that, Deep Q Network (DQN) (Mnih et al., 2013) is also a widelyused RL method; however, it is not suitable for our case, even if our action space is small.",
"First, we cannot compute the immediate reward for every operation; In contrast, the accurate reward can only be obtained after finishing processing the whole training dataset.",
"Second, the stochastic policy of the policy network is capable of prevent-ing the agent from getting stuck in an intermediate state.",
"The following subsections detailedly introduce the definitions of the fundamental components in the proposed RL method.",
"States In order to satisfy the condition of MDP, the state s includes the information from the current sentence and the sentences that have been removed in early states.",
"The semantic and syntactic information of sentence is represented by a continuous real-valued vector.",
"According to some state-of-the-art supervised relation extraction approaches (Zeng et al., 2014; Nguyen and Grishman, 2015) , we utilize both word embedding and position embedding to convert sentence into vector.",
"With this sentence vector, the current state is the concatenation of the current sentence vector and the average vector of the removed sentences in early states.",
"We give relatively larger weight for the vector of the current sentence, in which way to magnify the dominating influence of the current sentence information for the decision of action.",
"Actions At each step, our agent is required to determine whether the instance is false positive for target relation type.",
"Each relation type has a agent 1 .",
"There are two actions for each agent: whether to remove or retain the current instance from the training set.",
"With the initial distantlysupervised dataset that is blended with incorrectlylabeled instances, we hope that our agent is capable of using the policy network to filter noisy instances; Under this cleaned dataset, distant supervision is then expected to achieve better performance.",
"Rewards As previously mentioned, the intuition of our model is that, when the incorrectly-labeled instances are filtered, the better performance of relation classifier will achieve.",
"Therefore, we use the change of performance as the result-driven reward for a series of actions decided by the agent.",
"Compared to accuracy, we adopt the F 1 score as the evaluation criterion, since accuracy might not be an indicative metric in a multi-class classification setting where the data distribution could be imbalanced.",
"Thus, the reward can be formulated as the RL Agent Train Relation Classifier \" #$\" \" # × + # + ×(− # ) Noisy dataset - ./# Cleaned dataset - #$\" Cleaned dataset - # Removed part Removed part Train # = ( \" # -\" #$\" ) Relation Classifier RL Agent Epoch − 1 : Epoch : Figure 2 : The proposed policy-based reinforcement learning framework.",
"The agent tries to remove the wrong-labeled sentences from the distantly-supervised positive dataset P ori .",
"In order to calculate the reward, P ori is split into the training part P ori t and the validation part P ori v ; their corresponding negative part are represented as N ori t and N ori v .",
"In each epoch i, the agent performs a series of actions to recognize the false positive samples from P ori t and treat them as negative samples.",
"Then, a new relation classifier is trained under the new dataset Noisy dataset - ./# + - ./# { 6 #$\" , 6 #$\" } - #$\" - ./# + - # { 6 # , 6 # } {P i t , N i t }.",
"With this relation classifier, F 1 score is calculated from the new validation set {P i v , N i v }, where P i v is also filtered by the current agent.",
"After that, the current reward is measured as the difference of F 1 between the adjacent epochs.",
"difference between the adjacent epochs: R i = α(F i 1 − F i−1 1 ) (1) As this equation shows, in step i, our agent is given a positive reward only if F 1 gets improved; otherwise, the agent will receive a negative reward.",
"Under this setting, the value of reward is proportional to the difference of F 1 , and α is used to convert this difference into a rational numeric range.",
"Naturally, the value of the reward is in a continuous space, which is more reasonable than a binary reward (−1 and 1), because this setting can reflect the number of wrong-labeled instance that the agent has removed.",
"In order to avoid the randomness of F 1 , we use the average F 1 of last five epochs to calculate the reward.",
"Policy Network For each input sentence, our policy network is to determine whether it expresses the target relation type and then make removal action if it is irrelevant to the target relation type.",
"Thus, it is analogous to a binary relation classifier.",
"CNN is commonly used to construct relation classification system (Santos et al., 2015; Xu et al., 2015; Shen and Huang, 2016) , so we adopt a simple CNN with window size c w and kernel size c k , to model policy network π(s; θ).",
"The reason why we do not choice the variants of CNN (Zeng et al., 2015; Lin et al., 2016) that are well-designed for distant supervision is that these two models belong to bag-level models (dealing with a bag of sentences simultaneously) and deal with the multi-classification problem; We just need a model to do binary sentencelevel classification.",
"Naturally, the simpler network is adopted.",
"Training Policy-based Agent Unlike the goal of distant supervision relation extraction, our agent is to determine whether an annotated sentence expresses the target relation type rather than predict the relationship of entity pair, so sentences are treated independently despite belonging to the same entity pair.",
"In distant supervision training dataset, one relation type contains several thousands or ten thousands sentences; moreover, reward R can only be calculated after processing the whole positive set of this relation type.",
"If we randomly initialize the parameters of policy network and train this network by trial and errors, it will waste a lot of time and be inclined to poor convergence properties.",
"In order to overcome this problem, we adopt a supervised learning procedure to pre-train our policy network, in which way to provide a general learning direction for our policy-based agent.",
"Pre-training Strategy The pre-training strategy, inspired from AlphaGo (Silver et al., 2016) , is a common strategy in RL related works to accelerate the training of RL agents.",
"Normally, they utilize a small part of the annotated dataset to train policy networks before reinforcement learning.",
"For example, AlphaGo uses the collected experts moves to do a supervised learning for Go RL agent.",
"However, in distant supervision relation extraction task, there is not any supervised information that can be used unless let linguistic experts to do some manual annotations for part of the entity pairs.",
"However, this is expensive, and it is not the original intention of distant supervision.",
"Under this circumstance, we propose a compromised solution.",
"With well-aligned corpus, the true positive samples should have evident advantage in quantity compared with false positive samples in the distantly-supervised dataset.",
"So, for a specific relation type, we directly treat the distantly-supervised positive set as the positive set, and randomly extract part of distantly-supervised negative set as the negative set.",
"In order to better consider prior information during this pre-training procedure, the amount of negative samples is 10 times of the number of positive samples.",
"It is because, when learning with massive negative samples, the agent is more likely to develop toward a better direction.",
"Cross-entropy cost function is used to train this binary classifier, where the negative label corresponds to the removing action, and the positive label corresponds to the retaining action.",
"(2) J(θ) = i y i log[π(a = y i |s i ; θ)] + (1 − y i )log[1 − π(a = y i |s i ; θ)] Due to the noisy nature of the distantly-labeled instances, if we let this pre-training process overfit this noisy dataset, the predicted probabilities of most samples tend to be close to 0 or 1, which is difficult to be corrected and unnecessarily increases the training cost of reinforcement learning.",
"So, we stop this training process when the accuracy reaches 85% ∼ 90%.",
"Theoretically, our approach can be explained as increasing the entropy of the policy gradient agent, and preventing the entropy of the policy being too low, which means that the lack of exploration may be a concern.",
"3.1.2 Retraining Agent with Rewards As shown in Figure 2 , in order to discover incorrectly-labeled instances without any supervised information, we introduce a policy-based RL method.",
"What our agent tries to deal with is the noisy samples from the distantly-supervised positive dataset; Here we call it as the DS positive dataset.",
"We split it into the training positive set P ori t and the validation positive set P ori v ; naturally, both of these two set are noisy.",
"Correspondingly, the training negative set N ori t and the validation negative set N ori v are constructed by randomly selected from the DS negative dataset.",
"In every epoch, the agent removes a noisy sample set Ψ i from P ori t according to the stochastic policy π(a|s), and we obtain a new positive set P t = P ori t − Ψ i .",
"Because Ψ i is recognized as the wrong-labeled samples, we redistribute it into the negative set N t = N ori t + Ψ i .",
"Under this setting, the scale of training set is constant for each epoch.",
"Now we utilize the cleaned data {P t , N t } to train a relation classifier.",
"The desirable situation is that RL agent has the capacity to increase the performance of relation classifier through relocating incorrectly-labeled false positive instances.",
"Therefore, we use the validation set {P ori v , N ori v } to measure the performance of the current agent.",
"First, this validation set is filtered and redistributed by the current agent as {P v , N v }; the F 1 score of the current relation classifier is calculated from it.",
"Finally, the difference of F 1 scores between the current and previous epoch is used to calculate reward.",
"Next, we will introduce several strategies to train a more robust RL agent.",
"Removing the fixed number of sentences in each epoch In every epoch, we let the RL agent to remove a fixed number of sentences or less (when the number of the removed sentences in one epoch does not reach this fixed number during training), in which way to prevent the case that the agent tries to remove more false positive instances by removing more instances.",
"Under the restriction of fixed number, if the agent decides to remove the current state, it means the chance of removing other states decrease.",
"Therefore, in order to obtain a better reward, the agent should try to remove a instance set that includes more negative instances.",
"Loss function The quality of the RL agent is reflected by the quality of the removed part.",
"After the pre-training process, the agent just possesses Algorithm 1 Retraining agent with rewards for relation k. For a clearer expression, k is omitted in the following algorithm.",
"Require: Positive set {P ori t , P ori v }, Negative set {N ori t , N ori v }, the fixed number of removal γ t , γ v 1: Load parameters θ from pre-trained policy network 2: Initialize s * as the all-zero vector with the same dimension of s j 3: for epoch i = 1 → N do 4: for s j ∈ P ori t do 5: s j = concatenation(s j , s * ) 6: Randomly sample a j ∼ π(a| s j ; θ); compute p j = π(a = 0| s j ; θ) 7: if a j == 0 then Rank T based on p j from high to low, obtain T rank 12: for t i in T rank [: γ t ] do 13: Add t i [0] into Ψ i 14: end for 15: P i t = P ori t − Ψ i , N i t = N ori t + Ψ i , R = α(F i 1 − F i−1 1 ) 19 : Ω i−1 = Ψ i−1 − Ψ i ∩ Ψ i−1 ; Ω i = Ψ i − Ψ i ∩ Ψ i−1 20: 21: Updata θ: g ∝ θ Ω i log π(a|s; θ)R + θ Ω i−1 log π(a|s; θ)(−R) 22: end for the ability to distinguish the obvious false positive instances, which means the discrimination of the indistinguishable wrong-labeled instances are still ambiguous.",
"Particularly, this indistinguishable part is the criterion to reflect the quality of the agent.",
"Therefore, regardless of these easydistinguished instances, the different parts of the removed parts in different epochs are the determinant of the change of F 1 scores.",
"Therefore, we definite two sets: Ω i−1 = Ψ i−1 − (Ψ i ∩ Ψ i−1 ) (3) Ω i = Ψ i − (Ψ i ∩ Ψ i−1 ) (4) where Ψ i is the removed part of epoch i. Ω i−1 and Ω i are represented with the different colors in Figure 2.",
"If F 1 score increases in the epoch i, it means the actions of the epoch i is more reasonable than that in the epoch i − 1.",
"In other words, Ω i is more negative than Ω i−1 .",
"Thus, we assign the positive reward to Ω i and the negative reward to Ω i−1 , and vice versa.",
"In summary, the ultimate loss function is formulated as follow: (5) J(θ) = Ω i log π(a|s; θ)R + Ω i−1 log π(a|s; θ)(−R) Redistributing Training Dataset with Policy-based Agents Through the above reinforcement learning procedure, for each relation type, we obtain a agent as the false-positive indicator.",
"These agents possess the capability of recognizing incorrectly-labeled instances of the corresponding relation types.",
"We adopt these agents as classifiers to recognize false positive samples in the noisy distantly-supervised training dataset.",
"For one entity pair, if all the sentence aligned from corpus are classified as false positive, then this entity pair is redistributed into the negative set.",
"Experiments We adopt a policy-based RL method to generate a series of relation indicators and use them to re-distribute training dataset by moving false positive samples to negative sample set.",
"Therefore, our experiments are intended to demonstrate that our RL agents possess this capability.",
"Datast and Evaluation Metrics We evaluate the proposed method on a commonlyused dataset 2 , which is first presented in Riedel et al.",
"(2010) .",
"This dataset is generated by aligning entity pairs from Freebase with New York Times corpus(NYT).",
"Entity mentions of NYT corpus are recognized by the Stanford named entity recognizer (Finkel et al., 2005) .",
"Similar to the previous works, we adopt the held-out evaluation to evaluate our model, which can provide an approximate measure of the classification ability without costly human evaluation.",
"Similar to the generation of the training set, the entity pairs in test set are also selected from Freebase, which will be predicted under the sentences discovered from the NYT corpus.",
"Experimental Settings Policy-based Agent The action space of our RL agent just includes two actions.",
"Therefore, the agent can be modeled as a binary classifier.",
"We adopt a single-window CNN as this policy network.",
"The detailed hyperparameter settings are presented in Table 1 .",
"As for word embeddings, we directly use the word embedding file released by Lin et al.",
"(2016) 3 , which just keeps the words that appear more than 100 times in NYT.",
"Moreover, we have the same dimension setting of the position embedding, and the maximum length of relative distance is −30 and 30 (\"-\" and \"+\" represent the left and right side of the entities).",
"The learning rate of reinforcement learning is 2e −5 .",
"For each relation type, the fixed number γ t , γ v are according to the pre-trained agent.",
"When one relation type has too many distantsupervised positive sentences (for example, /lo-2 http://iesl.cs.umass.edu/riedel/ecml/ 3 https://github.com/thunlp/NRE Table 2 : Comparison of F 1 scores among three cases: the relation classifier is trained with the original dataset, the redistributed dataset generated by the pre-trained agent, and the redistributed dataset generated by our RL agent respectively.",
"The name of relation types are abbreviated: /peo/per/pob represents /people/person/place of birth cation/location/contains has 75768 sentences), we sample a subset of size 7,500 sentences to train the agent.",
"For the average vector of the removed sentences, in the pre-training process and the first state of the retraining process, it is set as all-zero vector.",
"Relation Classifier for Calculating Reward In order to evaluate a series of actions by agent, we use a simple CNN model, because the simple network is more sensitive to the quality of the training set.",
"The proportion between P ori t and P ori v is 2:1, and they are all derived from the training set of Riedel dataset; the corresponding negative sample sets N ori t and N ori v are randomly selected from the Riedel negative dataset, whose size is twice that of their corresponding positive sets.",
"The Effectiveness of Reinforcement Learning In Table 2 , we list the F 1 scores before and after adopting the proposed RL method.",
"Even though there are 52 actual relation types in Riedel dataset, only 10 relation types have more than 1000 pos- Zeng et al.",
"(2015) and Lin et al.",
"(2016) are both the robust models to solve wrong labeling problem of distant supervision relation extraction.",
"Zeng et al.",
"(2015) combine at-least-one multi-instance learning with deep neural network to extract only one active sentence to predict the relation between entity pair; Lin et al.",
"(2016) combine all sentences of one entity pair and assign soft attention weights to them, in which way to generate a compositive relation representation for this entity pair.",
"However, the false positive phenomenon also includes the case that all the sentences of one entity pair are wrong, which is because the corpus is not completely aligned with the knowledge base.",
"This phenomenon is also common between Riedel dataset and Freebase through our manual inspection.",
"Obviously, there is nothing the above two methods can do in this case.",
"The proposed RL method is to tackle this problem.",
"We adopt our RL agents to redistribute Riedel dataset by moving false positive samples into the negative sample set.",
"Then we use Zeng et al.",
"(2015) and Lin et al.",
"(2016) to predict relations on this cleaned dataset, and compare the performance with that on the original Riedel dataset.",
"As shown in Figure 3 and Figure 4 , under the assistant of our RL agent, the same model can achieve obvious improvement with more reasonable training dataset.",
"In order to give the more intuitive comparison, we calculate the AUC value of each PR curve, which reflects the area size under these curves.",
"These comparable results also indicate the effectiveness of our policy-based RL method.",
"Moreover, as can be seen from the result of t-test evaluation, all the p-values are less than 5e-02, so the improvements are significant.",
"proportional to the original scale, which is in accordance with the actual accident situation.",
"At the same time, we analyze the correlation between the false positive phenomenon and the number of sentences of entity pairs : With this the number ranging from 1 to 5, the corresponding percentages are [55.9%, 32.0%, 3.7%, 4.4%, 0.7%].",
"This distribution is consistent with our assumption.",
"Because Freebase is, to some extent, not completely aligned with the NYT corpus, entity pairs with fewer sentences are more likely to be false positive, which is the major factor hindering the performance of the previous systems.",
"In Table 4 , we present some false positive examples selected by our agents.",
"Taking entity pair (Sami Moubayed, Syria) as an example, it is obvious that there is not any valuable information reflecting relation /people/person/place of birth.",
"Both of these sentences talks about the situation analysis of Syria from the political analyst Sami Moubayed.",
"We also found that, for some entity pairs, even though there are multiple sentences, all of them are identical.",
"This phenomenon also increases the probability of the appearance of false positive samples.",
"Case Study Conclusion In this work, we propose a deep reinforcement learning framework for robust distant supervision.",
"The intuition is that, in contrast to prior works that utilize only one instance per entity pair and use soft attention weights to select plausible distantly supervised examples, we describe a policy-based framework to systematically learn to relocate the false positive samples, and better utilize the unlabeled data.",
"More specifically, our goal is to Table 2. teach the reinforcement agent to optimize the selection/redistribution strategy that maximizes the reward of boosting the performance of relation classification.",
"An important aspect of our work is that our framework does not depend on a specific form of the relation classifier, meaning that it is a plug-and-play technique that could be potentially applied to any relation extraction pipeline.",
"In experiments, we show that our framework boosts the performance of distant supervision relation extraction of various strong deep learning baselines on the widely used New York Times -Freebase dataset."
]
} | {
"paper_header_number": [
"1",
"2",
"3",
"3.1",
"3.1.1",
"3.2",
"4",
"4.1",
"4.2.1",
"4.2.2",
"4.3",
"5"
],
"paper_header_content": [
"Introduction",
"Related Work",
"Reinforcement Learning for Distant Supervision",
"Training Policy-based Agent",
"Pre-training Strategy",
"Redistributing Training Dataset with",
"Experiments",
"Datast and Evaluation Metrics",
"Policy-based Agent",
"Relation Classifier for Calculating Reward",
"The Effectiveness of Reinforcement Learning",
"Conclusion"
]
} | GEM-SciDuet-train-136#paper-1365#slide-8 | Conclusion | We propose a deep reinforcement learning method
for robust distant supervision relation extraction.
Our method is model-agnostic.
Our method boosts the performance of recently proposed neural relation extractors. | We propose a deep reinforcement learning method
for robust distant supervision relation extraction.
Our method is model-agnostic.
Our method boosts the performance of recently proposed neural relation extractors. | [] |